[Binary artifact: this file is a POSIX (ustar) tar archive of a Zuul CI job's output directory, not readable text. The file listing recoverable from the tar headers is:]

var/home/core/zuul-output/                     (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz  (regular file, mode 0644, owner core:core, ≈1 MB per the header size field)

[The remainder of the archive is the gzip-compressed body of kubelet.log.gz; it cannot be rendered as text. To read the log, extract the archive and decompress the member, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.]
!BIs#@#P8-5JkOJ;k Ql?jjzժ: I;>CEG 3dj=w}K&LSAJidWvLCVIx,X2̘I/BJ٤dELQXIB *"r,gNqȪ2[X7qv] [@4Z@4Z@4N y&}nM667xF]~{\)%?@ F 9_!4minh; G Stʛ\s?eG<qpwB7SN 4 C2<[ Di8*Ofɘ|JgDc ,耳4`]Ԏc=91%DxV"r) ^7X m&&|<48BbƼO@'x~{7W`[[[lI8&ϙJΧL $/S>'q'EJ+L  e w)q!X`譁hx*阄L,BeZΎ~*Hӡk`X42X4L1sk'Z:Kƒp[(M ݜPS{鐘S}]$/< SfzbKD CV貊8oV:-(LW=bthp###nm4j ;z ڻ'C k) 8T5˦x4BF'i[%\ԢG^ ӮӴ3ZQ(c3뤔guGwʼuG:0-2*4V"η`ZFZZ9RKh4>rEa4hZ3N=OE%O@+(ey"V) ùOIFӸѭ\"F9d DI!w^t,oKЭ]EDk89X}R).T.HRdY)o6PA=Џ00Ei-#ڙyk$ Bg7jʾ<6]-]z$TP܍?\.&&u<@lw (.qf;Pev&a.*>PH ()HSFkd:,O1Md,ZIk6X'._jhLz w9րYwJaC;HT`GU.L6R.|&J?ϊq{$ͣ\Z; \۠y2>j)ުNTX1, Z܂`,fKώ#8s)ܘTI欬LgّzȬGԻMJOg6V[Dȣ NHHYtdgr UJJgY^(YV6j$P] |5s`X(R' X&T:'1$4K)bIg!8UM QtF77ӿѻD9L F#IL `LF|,s<, h 㘤;PܪσijBRt$Ձ߯}Ϊ%UN/NISFId!Dt,( Np dǘrKlV#B=?{Wqʔ^lUe tU_K,v=:"8*"'#tr.SoN (˓ye N[lBHatH\Ռ\s0Aa0Ic,>Mס;XK/&R-mBgg#80l"ZMwN"-w9q 0VF߁:ߧ?_~>c>~~ $ ==@zCK0^34Uloso1fmNyŸ8<D췫oo]pC(_T3Y57_A/B"su%UYJ ,gTK_E>1T ܗ~]u|tM/[ >$z1 KSG6`"0]d@i( !(Ij 2;$=mC0JWІKoՎHBv4 @ $ AKqFn=7Iy {K Wh'I.v޵rar -R$/U׼)K?ͯö"g4`+$T ,sqߥ}HQ͹4a;^=z1ϬB !+/cH8|ʕR(c0`kaW+ Va;KǞ Y;^JpJ7Dd%0li&Ff:q<<$ˑDKsݲzPr Ɨ~ _Zmr9"yrp43"E ˁh"P6 ҡ('s+XК3K;mExkLJճ)/V-o~S-bX[0 v盧9_R|Fy3T}F XsY:p`rr8(]KYCd8>8( hˍBڠk .fF6p~6((dQ0 Q=*)ɵ+z_iGO<- B_#jk^׬JXh-yFN tP[˻l!jX9en@OOQ[{{\x۬G(lj9:L^(Qz+;Ibs߬*W!Dh'.'@-T(}?X ӓMƊ'*9ݚixv,[sVv_gg§fL9~\?e8lt6Mz}L.Byyin#ecG%ձdcIRzXuX^`6gn~ Ns]*F6JV!6T,Qfp^׽={ëφ_%,+Kd/J[vSٝӗ9Ɠ Yit6q jF,B0#BXYL^jʈhA #(H8HQx~=Y`P}Sky}=rƽѤi>cCOk/_{e6} #޻^~-\/U`tY'7ͣ>w( Y4 ab_Jт뻩VPOz1ߝd.N0;|:\8YwBNX:xdXD˷ zav)DԽLj۰g7 ߃Ȗm:.thy9b辄II-\wn֎VHJ*)&?W̭WŞiJ_0pŞI+5dޡ٠6iq6ngܠ5 ?V,Pj3{O]-6tmv[|5MTQvcܬ1evҜNDN:+2|!5ި`dxYǦl7plqӠ#d]DGsDC$Pݟ"ayg۹s8C 3N\px8z9P#2@_~"5W*f}ޘbD X-ͅۦL5KȰ*8R _TcHgp0>-:,cʠM#5/VSpf1sEϥ؆2*l\u=mk>N캅 $g\Kr@ | ;XWөp.`9Oƣ~ۼm\ygZn5Pi:*tMDճl^+eC6%[ͭN7:h_iSӞKɪwf^R͌{U ܙF)}V'X=н'6'Z>q٧*_ |FfM:Wi(a Z|z=dYjJOS,mW)8jSI$$&Lw6X'hYvMt]NItҤUf2SHRHHGB &uD9hsfȵ1OoL0&c,GB@$m̏ Øn_/:i.ZH286/H7mI#-zEƊ. 
@>یWja1vnZQpҌ4Es?]m:efХ@~m sW~{ ɵ3Lk3V#ܐs׵;x&*B_ƥe^:96/J$ӥ̉\%3sv8hfD%W]fR.zyՇǧՏ9L 9֧q 7Q쨔l ,r[c(CS^l&RS4#WooꘆY9~,g$4RAyфf.0=nͭ //+S%q!D!e!x0k5f,`ZFL&Z ih-'?8}kGjEh&RGEHEp^0#cii#T s)`U"x3$5"j* 8CvYI(J1,,VJ\mYDtN&wףÉlOvԤ9oF*URâ[xrR<+Kܘ쩌"-^cɞݪVn/P2zsYH1rK%@P ^`A Ɯõa់)黦&W)dkEx|-${ZmqS:iW r L7 j-$* %^FHHL"[9 NtVIXBcl0w4H&$ZBXG0yƴ!aw.:dɘV!!: $H[XA`Ir,aKQm؊+N:I$mvؐѶ[>\ˋ2ܻO}K&g>W^DMVo v0M?WE(%3 |͆$wWKj\ 4s0*>1j(LYU>%6?dŗh-N~k3?jSOyv`d9m>6{p[?DiMI=jWXe`mʾ[@̾(* ѣqObq,ZzqD v'9d<0EOj:po4IGq89E}u apQ@6ƒE ,ӰW5 Oޙ'9VF\1,R* Sy &8ZNaEX"+@} XPy4zȕjkin,k9c 2UwuS}*٤=nrLߪzCGds1=ep(|n۽1gvhjZ .Y8Dyc+pC\riW/.5EiVds1oZ=9|y榚pv-`Fvۗտ\ dilvEay0,F RS>*<95zE͊,RT޲x g>Ux@޺o%أO{ƒBh}oHԺكz.usϯr/N3U3Lʸ-),/?-~Wwyef}0s,qt~\LedNsS7bj.9tۮ.oޏyXeFp:_{œ=:&9lD4 2ܺo~ \!7//o#J z0!>:&}~ɴ*ϱŋ: _lF`XP4kVj"\`P :8Qs1ֹwM}p}sׯ B={U?lzHsۯ1+<=˿~< s ֕ !'ITxM%fm 28'FK)QҴ"D6=7Xmv& jVqd&v?˳>^x,>-ybJ+gy ҈߳Ri..չ 'R+nʇoH[`4+.0ԨCn wlYX1icK `D[8.btjř:ŌB8t߯pYTW4Vօ~{o {~n%z3I}HO湉CQN?q=L*6OUS}T30o֚cpB?ŧ.f\DS\+/_M)8_-sRЃ$ZޤJQ%&yT(^["̠*I~54ɇ)1]amHR XMsRAJLÊ!ETeI1F.jlUA2J["dS|ECaiB1ep*YEK>e5f⤓z_%An<k\b(`踤|5f1LMqa >9d1E[ ّ.Vm(:XMΥX&!.\;"pn}l}\;'tmbjmC{ W(J%vη_%1NMrH |ApFH=cTzV>u^שZjUrZ?./oNlfۙցVwm6P~b+Ґb҃or߁}j?~߯ʺDj̩ : {_ȃɚF6. 
Ө*-dmلEUkSk{*XZeHD\uJ$irL#]Fgc+tL7N>;TPe* tozRdE`ɤȊfSdEZ$u)"E)0EH{iW<_l/-t &ĚцGplel XfdScNľ;o|b8t  DKyw6`B‚Zb5cP s*T>xI>Jh1à ~l;3=gZe#Q2뇴ĤKYe\z+<.rsstǽ{Db{,DUT!hP7 hf]IU:]*43.GgUQbl էC;]yHʀn tQ G^̹?d9C7doy<`:x4ep5s@J'9: o8`lppyu~ ⣪3=y}HusZLRI_%@4E>ὕ}1u|bkFsyִM[ ,m>ѼbV ~צڳ١i߿|hز Vr)H\UnԡB!25٨% (E.b֣|1@3g;޵_TxXCī6ݚ`[vrD 6<u :.7bn#dv`v`<5:;0:(|bLtƴFwݽvVMmc拴]e݅i5L/=BmrdwKn׎6ܺ|f!+:ݳLGpu}7HNVFo]M[|8fOTMd@{)O.t1Y[@Q֖dL [Vt&~:m׆/ZnNj/Mb*lM1Ig_e_[lƳK~rb,j#H'#~\?]brqfkY  ɚ3+  @H  Ő:m7s`qvKLUyqLfۥ.n0$yKHVuGB˵zSjotBu'SgT 5i{!2^g3_Vg*#5 مrJ8`yujY)uDUo>&AiMa)T,P2zbT51 -qR<"p..Z&}rHSP*hMŌjxgrqXl =JfJ\T^{K|#)Q1~DmS|Qd8S>k|ծRőWJlhpֶDS99QRS:rGC5C0~h30*Ԙmt^6jR8YgJ)rQqU.(EfHI;R3QP嬫00F`Fh숳` *mPԖv$<+b(&G:(r,d1 (1ٚ11U!R6o;ƄϏ C>"*!:9(uTT5XRd.vCQNFZIOzo9pT{eT!F&r(ٰ2ET P' (r ҉WRc& v$4d[c.GKJF\|T|G6,ɦm -M~鱀o4y-`d-sHvZX-jseѳ/9`@Ղ@XD'TLj#cCu4KC^=2:g<1mkgbMʲ fCn ͗%LHqɍ Jl=drG#%ZeG=pv@qbtR2|{L@y^kۜ'Bz)q \ زWƘ:h,F+IWEqٹKX-DfZ$$ rc>dZX@Bj |,qvyt 6u~gy0c!Xw3Z[AӺL*!H!a B'ցul:X܀Ϫ*Z5*NB ~YZxGalդr1`WYuWCFi7K{Vm5l j(jM?0gI^X׀d}jĔ2)!Arx"** lFGqr>y<7{hY.wtI>BHBP ,6CLxAn>y6^yz_9J>_3ҍ>$Hۢ}m?>-׮B{{mjbaU+roC_zhC>tBcwapeLփEjb䏭ŝw7g具k251GxZ&*1Dy[-zr'4I '+`kOlh[7r{7j.3{Lv}Xw|5ѳ>GMjkRjz6V޹Y:Aڱai> 1! QNCQj'xx=HCfr~fǻ7?owo30Tk~o#Aݾk[uPߢk4]z9.⃼]pz fiԇog_LƳT@ۓztk2"W.<ȫ0,G lh5wAU:C=בA{UX ل'q?ƽ]&V vIU/Y*E$Y A#OZb )Hjkb 驭 5onV´>cR>IY$FId!dVxUlNC:ogҠFsG/4̹]2uW͓ܮ[Wx7~n7on%Wmc{7V-߭H4>V5]R;"6+"nueO2Z~"w"s׸DFr'r7*-?s? 
W6d^7Y&i'Ss"JDňP&(Qw!P?U~`d:^$YRtѕŬQ 0Ihj@ЄRQ^d+c@*eBR>Py_4آ$)DL[*2FNҵg|~}%{VE:̘3OPڛW43nK|:آ͛`Ԟ0ޘGS}x;Oheo-.5 A.Imh44: D}S1ia *M %XYGJ5kz0Z6+Rge0 FyN[Żq è+y:al!oCr.o3zZ{}4QH,p c׵9yW:# 9z(mC* Pp' Dozx{F =Y}{=" !"PT,^;e%JB,RBp:5J*[2>tMELL\v^ii HoeВ*&!;6vF-nOcHWwof4CWwr懋6R{80uwͨ8սyT},J~(AEMUAM4$ɖPJ *7ҧvN};>Af0U[ӨLZl\9itEZ "+R^L4:(.YtNkm0VmMI>F/Ie *S_3rQ5t2N22.Λ4ˋlBHDCnTi'n[*JMN rTIX0)0tAC!Xg>fN*-3n.1v& k55I~9ŞT@%#Td h e%H&DM1; JPN*ifd.J,Zzʦ$A-xa rhm$%WYgRv~XoJ.!ȂJAZ,gf:R#hsV"Х|~-\oN?3g+v"{J:y!2DE2L UT48IO\v<ӺeO(f"fV#&"$z I 1^Ʃ/,3XV^зȥX Q5Z ZbJ$Z`k% Ng ыGVƚt|r3W:d!D#٬n)LFR+&MۤY6ZEzzggr6e `싉*Rb9*fSV*ws$ \uZ9ꪲcGrF%Z6TiKaC `2뜩/!}/w ߮sW[ƀxf}{5?ߠr&eQ U((+%EIA7$t_bk>khkpƛ1HPAj Xs_2% 8G)JNeO>/5>=9IgI:gTIڡt:t6ɣ? ( UJw㢶)Д }v;mؿZ6YV,s>(ftQfL9 gaΕPx&\F𠢊6a!S nF T u+H:.y' Bc6F'1AhX;lu`tE,i7HcnyX# ZbHU$$\9DuٳZO3ߚTX.f*hTaf~i4V EA<O 0$~VXz@хJ84 p6hH,e%! A1@h\i3m ~ +z=OЗPa']Dkiivۻ_^j@0ц٬M[.Wvh8(=N}UʘD?w޵7ޜ^V̮ is^2 cY{ۺ+B.7I-0eX,pJeYW-ʒl\Q!y_ ] B:VvbJ]˰ex=mQ˟)f)>=Y޿\92~&V+J.&WL8@hԟ'7J6^ї7?L\3Dm:Y8gCvݛ???Ӈ޽@|xp> NHw//Tq\KkQr׳b]%ܱ2hg7}-r[/0 o?W:ߏHi}oQ&+6^?-T_Hne!/b&|[?h*ᡝ'.]=An= [e >Nŝ%VU#鱽 swYsb:tc .c,7HDTF5zɓpAXID&NKJW(O$]!W;'ZdB Dj߼R^||+k4TǙm$%L5Djwi 9&њ >˂A䈱4 m>-A tQhHHVa(g!1xn"82Q1xΒj}&٬&!OWZ/5?8FVeїa+M'|Y'_ŊnSA2%rЬ@2 N'N#3"g)]C'I` x%";ޞHnZTG݂![Լ--v*7o!ھYzB޻TfvEfa^oaqׂ&_zth@g 5=<~pj \C0Ld JL"zT4h(ܗa tZo{j/Ž9Sh/E^˔$v-/P!6 KJ9Z 5F+ Euc)\XύN%eL*fp1⩣FksN"<֍Y7FODKZ &>)1X{ B_" Z4|T@52o'TVL .էR!C +5d(a6G_vnV ͦ9#6ŘFL>@Qz/ۘm||29g?4ǏY^5v7]dvaҬY ]pI1'ЛENE=ތ %p) bTmAL&ʌdXGhk1.#_޲'k2wo{>[4|O9jwi+gCw7qu _?UYen*%->QWY&NG~S\_&tgkI Gf~xJA])seV¬y8\+vիV%G&1M|[0uҫɢtP  I<.lVW/+;&.n#q$e4\0tLOtgl fdaqpac9\_f$Zf$?Kb4C-6Lcv;1jLh'Y<՚x`;Žq 3%}=TZL@FZ/xnӪhQz?-8)}tyF *Dp8 p#*qA0DbW\۔3kb{>5T=Kƃxo۽nYHզU,t7f&38 =FKW }ڵBox seEHZ7D4.j8 SVcw*{bѪoX,OBZH$0[jb<2Ep&0mbkM'6DZB.*`wItOZ#Ps Y]a8[,1'{}% PMBVC7FE!Z8|=OWu ǪH;컧yѼa7'>CkM#t!M?iPM~jWc9젽ϤIƃF\)]"' $ ꄮmy U#&P#?wIuxv Hw׳ޏ.o"(L=Vu^ _pi*Fi&ruPHY"!(Ovr3DמILv/|.yÔGSAO {\b.DThŒsD#2@/o ^DH㺚BS!(t"8mLud;rՎsꋫ;-';Grw:_ ;:Mբ׷>,q򲵁f+NsSg/ʕ7H0e ?KR}8"S 
\&#U2Fce*I"J!LQ¢!-'zP!ǂA#5Z[J؄L4g,fθ^K9L3RP^y!pׁ뗟WG6xs Vt@cAr8vڨif#9Yh\*/IZbDD0Ha"^6ْ NT’*h h@M&ϣBb x"C)Z!>.$G **kKPeJcOƌHZG$AK\p^CQRţq[E3u\TJV)mvؐKQu6f:` vvqu{g%* g^ƣњn:-p~[9|pK-/:PsoW1/UMABtd.f2a-{F{Zns ?Jtrlz{ѷMSUƽa,PwF E:Z#A#!J557 NV,ώsr xJO̸GՒJ+5.GZk'T~Fdʯdp9+ZyW2+ϰR(SMۻƛxoRMjI7~iơ#Pcj@15F#Pcj@1ͥuǼC0 ل:7+]mwbd卌F6oL x27bJJu,[ZclLl=;<ئ1Eh.P4:rQ4Q*4*Q皦۞$,P9 *Õq|$&z tM(Q76oGα 'α(laxy؛5_\f 4v ⬀QbZsTku! :/j^N*B Dx64a,15je6f!!/G߯k9s6t$^|2($2XruJ^qAelnriJ%74:N>ܣoѩwVFa3z B'ʑܵ-k7 ǥRޔj.wY)fu6YF6ebnxRș<sT r k %:WPW]A>wuuq./d>M^)qQ1KmG:Zl`$H|@yqΜ,#LB!lf[Ki߬^%= aCtk3"׆!"τufI /=!!B)&GLOȽ1D>?Bӑ.HQ%schp$Cfe/HuQ{DgD^`XD"r; ,rNSzt[-v;bw%5,nϿ|Do+Yexu3э(v$O{@}%c N% P6K&N.əRrO|yݜSR'8r$%::Ga$:(\-醞0~g xZ\Z>_xOO[KwoS"݈y|N {w5>(bR$>h-[' aykى/d>p֍*F=Gf#mcvV{ĸH%1pGó/g+]$/ƿ]Mrw?cVҌo\ 7-#6/#7*|$M8=n]sx>n9Wlܴ{%%rնdBFŠXjqpEm>_?Lj2&d /;_>_ ~p'0Xeo"ؓIǓu~y$?,=|icj-M-U5zacɞrzfqir;෫/6^7#8ZY},GAdi+D6, y|ޣinnS!V|*1RlLNxߴ'K9{1V|tכدVM5~U2K`5vw^O 0+Cc s)|9+R_)}䡚C7!kBdH&s`K}wFl2zFnHI9PYʞi!Vadg)O9e%cdG Y&pvjvH'N.cQz'QR=NH>K|ۚ|~⬭GI¨4tz!,2 J*6nh"tNAV޹Fۀڐ:dV *Ujq㛗75E Ņ\^Kb0q쵋 LbWOJշcҸ*ef dy$痫Ыj zld1t_NuL]đwbGWYQfq],ӄ&H&-(ZPQuad@Ep9ZTBJtƹ \g @Mt yZ?aa2׆ iIC7w4̧m=4PT׿O>5Zgַ}|:="ކxHcyq=nV,4Ng2 --vWX5A ]2iM!0,dmHLI:s [&#g6zt{_E w{TR@/g4 ]ZBsŸϗ_%tδ3ڡOrن[+UYersrlQp& }@WK%?,~yXnlhш6PDWr߽d^0{|9{ W/,XPX(Vte廆iK?z0;Ŕ"j/Zm L^į81i{[ས['X͡ϳTTR{N<Ϳnh6\mr[LK\I#-ݳӰqcλcN p9^El͜|fxXw,7<콌1I%*m)y- -|nyb 5z5 V?nxn 7E o {?\y[!iU\PҁL➓.$%`%id~ U LhG0bVO 6z{=R(?^/joR/|Np^j_6|?j%L~T*y\zs'?s3\=Alj(u6R r㼵 SM@뛠AUh$}SC.(V E\١2+" 4 mJª\:0 7掠G!Y25Y4 K$SNNK(|2ܬF)7{}aCkeW{nͬu3mɨWģ4!%^`C {dNj:ɴ}\Q|;}.{1L=Dl//˥QDU`CL )@.2KəeNj7QyBV֧O뻋vU,9D'(S`I+@ǤA 3!*3%5W ΘҫV`Ɖk"k%}x.G$7FHEha6-x6pJ1bWTg9J|سRy$gkз-Z}}`'ݎ{2P}zU7.v![7mo.klf 0]嬳aҠ ZDR[a+ƸANeHyoi]r?!FRR!hHFd2=f]\|Y4vZbVhjq,Zz{5TUVڔ#iŁ+9mF@B@A> t>RVE»hGfޒe2,gL $wY. 
Ԩhc$$Ho8Dޕ6r$".eG~ص_,TͦdQjDJ!1)R"Ũʈ/-dlE^zIۧA(6kOj*j |.8ǩJ[JJV<;8&JIֺcw#D ۛA}-j@{oKSөsJEl.>.^gϏwcnwLB]<1G߆ 滳kkZ?1SSkmިk'\J>lQHuEEs* ywdЦM3e,5Y# $|1zt^O-v_Q^RрEtZA!vs %'`w8qҟ%W vx2+>KRT\F%~,9 x;w2ux2JSǮ* vPޑLc^W䅳_³10pV2d5訔QOaif,$j@a(HGi.bG`h} ϫCg c1c>nD?]&|pTaoݾ^aع6.M~^ag!@abf n:쭻تӨgW^Xn\oɽS߂ꊴ$Ӌoplvy]W?&֜lo0(NAaDp:U?Mτ {=l]xn]r]r%* -:Mmr{mgW2j)l%)8aKg2]D(mw/Al!C+9 oW~Z}&-}fx{ -,Ac[ oqP@P[(i KL1ˣ`hb&&.[A"7*4II…P&a9[inbH՟B7_D=)wd͂5_~} 6,M![hQ,9$XyE6ő"I1Ըod !8 7rk0XJ7#g?ހHMY,d} 65|ڦD Vey he9VŬn/.G7{Bet[n`BFkMaڙAqc4:a ]2eݛ{2!MHut&$ GaBR(VRY䓧v!0l M>kHXLY2<ԗ|c'eLZTs1C̑M5 ߓѥ1mFf:9_89 .r \&>ٽ6߰-`Zc$Lbzz j^kWǂVnWMvչ5Vv;ج;>A.˺<1|B&(Bi0²$lfթQ A*l /t3 3u(L"к+!Ii)+Knfʜ-!(Y=nr5f>.\љ[XL(J'#?:jCNe"!F8*=2vD2vCNf>VeFJ*UkA;@@>D ;{T2#?uo oZWm[Bz.=: Ҳg׷F]`]Z S-H s!] Quh/; b-(.t;Y8/*)hKD1 s.[']C 6F^d׶FJk I@YDz_@`R"(=Ӗ ]TK3r6;j7!x7bμޢwךS#~ޟ\(ݤJl6"B xyJ*VLJҳ '>JgU%jorכ~oѝ &;'뀂V3R(Gsobײkll6JRA J!ZY-Dt^DB Rm$l 28Z֏Z *rҪARk9m6։75M,kWUEW_cf4%^n[h׶F_0Qm %DW`Y&=k\LI1 N7fz7㤜~qɆ^f .;@[}3<MTl?κoѭħĎ v#LgR(ފS ϥz'BFu* |ua -`B>RAQEX)5 ʱ4A5Hfb˨.F^+Ua/ >zy t{ 16ޣ]߶pGW[JP'#QJ@ΔDL.][ؔ;$Wq/94WU!>;G/@\}nn?alTKž/?([7 rEyV}s  .z/$mn}q#)v'TO.=\/R[]M/7G)7{ #>(<!uO[`[1y'p@ B4EZdێ& hhH\Bx"VcM0V32ZfVzLyt<|\ZYv0Wg}z(>kTW|TJ8]+.iԮ(SltX'KIaP4̸T%IU?r&uB!q$(KHX$LJ`( Q ^{b RDT΂(A[{"FePWBTAuD"xXΚ7 f3/F9$G2Xo|,NAվbΔ bIuĖq49kQgoz =}`F"{ $/Th A(*-:@('S { jAo'v+Y)glDQy(bfI&4頜: >z` `Gq\V<,2x7Nx3jx%Y4VIТjRvAlXIؐ2 KGifFEKO6YPv.9!EbD KhMyfi(A`B?%)R!)ۺ!)ȑHJc$5]]U]U] 6 Qh"S X^*%5P[AuϷGsM+]+GՑY6Q9dFŏqT37U|},.Wz4jx4פt1|28>~X^=+?$HSU]uq;d} s}o8l" ;$gJrN?aparnH;:yDg o ׸nZN3(O<] }tG_uqi+&Rq:>>]\>*)A}Y|WjCN?5}hr,L6^9qGވQyng[s.ĈV7oF'`! Kyܬ팑{\e&;;A.n7!sF #]ϟ,|D ׯ`^|{Bo\tF&WZţN&9oysœ'4>|j~*28a^Jĉ?:u;爎Avt݇ǷӇӷ?ROOㇷpE<l  Ǔ_VZ&Cs + E/ hrsƽK>ϳLَq^xpzrZ S5N2"L>A/*M9qTT@/BB4/bS>2_>. M}[w~t[x\ _P၄@\6fւ>4^fKVhCa@qgUֆ 1B%XDHdIJNeAQC< PDdiRgÉ*ݙtZz@j⿤ѢyɦLJ]u_ q6J 84qeEHZD.j[: m;.[+Y]j6ʧ֫LV̱dNxBj&Nj< 1-3AȍI:kFUpAb"&Θ /~\q֞0tҍx>w AvKa8WvYZz8і#Jdmq-ӼE$EߣBށ5;֬ pP|sP;o&o~'U¥_EpkosfRO6?N\7nZ 2I8BG)\ +:λ[L7Z:a,ʅC#E[! 
7-o[+"O凉JYosy֏TxY7`Qe] zÚɯ?u=H l{<|qtX 6Ak'_3:1IeU_6aUz}e>6Qϕ2ܕ*rRBPLJЖ m h*qOw(w9fyNorx<";W59$|;h8MGK7~YH{k (Ebh18o3 RHF%S69kE\`MaY{]y]RR:$9o2H`J39oO {\b.DThŒsD#2ha#eX"j4+D<ҍb9*N)}ִY!٠2\J[FHhp]y quŢ[\!qrTi8s0˥' j)Xu/w+/A2dA?<"1<@!pD%LHG P-dz@lQ%U !e YOkB2sI&*cTMȤ挍3.Rf a^[^䕀5\GWBkNeU s/z7=/~ݠn[3pM9JcPJI&E¹Ime+*Z0 _CD4ʱ9bQ pgmr'!(fJmc"^9vc,خ 3$,k ;vkC˵[ va*rURXGLfpp)Yΐ̉A "k.L2IHrԐs@-ׄHeo~l]bjuY^=.goiőݎ[UزĖջ]zirgZ.ƇAego ίAy;_q+'pg *NQs2IU+>M&' Ζȏ*4V7ߒn9wqP;PBZA=.#`A  +ۍB0gw;z~A;~9Ǚ܄?G^M6xrp~5S{yL!I44[[+Gń6Qc"IY⢢,oVgE}ݡc^$$0J#$A*-JaSBZ"* dDR .6q 儅ΓP'_'>+vd=*:4S&*%|2ɰ,TYw7[PMWb-EdTRKZaLIlrF JcnbKc;Dcs]^6bsT k'mecPkܖ9PO2M+'3CNe(ةW;% B~G`XIP&sJFƆT2bEeDh #AiZ5ȥGI#Cn6x!ۂznl $ p)cP IQĜS?SRXueg.ڥ5+hϝ‰C qor JK%X?%GD<n5qR <S/iHZXZF OL3eb"cNEJC/t̔d% PRj"G3BDN H4Y)ȡM`*52T yFPHڅe80(3Y`jDzM5i8+:%d3A OAӮ&[bG$';X+Gh H0Lk$eKA`9`R)M*V&HGԞCǘ$5 y _Y݈.UJ|}f|ÞJyGTP[{F(ޫO d*9qf6h92F,1qpȌAbjdI0"={3.b22DnT RY(FǃN=ȕ*ge/B;Ɖ[(< qLEV)DcXlrV;Y+W1kxi6$%_=;$8$%z)z?i(A*8L`ʶR:[gZՠV~R'Zp4&9̓}d5)F.E GLT; G1#ĞuiU{]ֶrSN TU˛BN5e(Ҩ+F,*^pɑnfe9,z;"Dei6Uc ,|»O9kZ 1 !Z lmRZȕ؃D]ؿ>\{kQ} FDPO"%p^[SFmGYž^>I1mUP)d#s\tLg b؇'n hQz{a ˭"tx%Ekõ& nPܮ?JFe[|Rv5 N}y; NYn%4p[Í[*$5* ޖݢ2qُ/8oQ &V?|.'i $|r$Vlb}t&8㒉svpl\?i8:LIk߭&,9}d 'AȌ:gMWnx&I hZ\Z<Ɂ*~oH;}m?jo5ج";wES.g|1g28>Q}xlx9ۭ_fW>~~~"HE @,9 Gdxy]&`t6bږ}O\6̤83nFlFn~UfHBpt'ZŊN =sx:n{YAֵ }Cnuӳj9dBFĊgXn}1YqlV ?sS+jl8]gb?::~*_?p?}<~ cYGˣ|3_|55ͷZZV|a^5yüǒ=֙ƥmHO/~a~NfAu9XN&Wlٹp2[TUNssMUOV'khwP! <4i&H͙0m"c'W[W_ ER6s^%@ѧzgt&g DJdUL Q +';+uL fx9({.#K?%V^7+V#g7L"8>V7߇zͣHϕgcQk5Ƿv龋Nė߫gK--*AN/8%#YFT3چRMdYTЀ)*;Goi u Vt@M޿y&.Mu&&eƞm6,dMjGbn =,z{1OfCt\fѡ:͢m6,|&#-(ZPQ0t d@ED rZT#JBJtͭƹ \g Jt yZ?aa2ƈٍIC774+A`g;Ƥ))F^q#F }31MCg 1ɡjV"UFvr>elņ(#7djttq{MA#z8]ʏ>4'gE!j[܊7_Wk'3~NQHa0)_z= ̜{v޾#No[TR@/dbNț;.m'<2l;ϙvI3v`>:g.U`VtY/7tRn>=Qɶr[@w6;=VWnnhшn6 s8>߽d^0{|>YAYzسPl-D7 av)EԽLjm< eAd%k[CZCerr]r{MNn7J4G1Iq4,;Yh>j7{/IwKTd&vޢe9O޼S*BD tң̆YBvso5ONx+mTMlDWrAI"2{NZ2JgLVɘ,3yjf[7Tf<."l꼮z{=Pb C'ޖ\V(GwxFVsQx+Kyk7AY>jľՈ}{+WWfJ Vh `% aU . 
~vPUoȃZ#c,K$SNNrJ(|2~Fqn)WxsKnÆ^@e>WϯnJFvߨlXjƱGGτON/N[*v}֫u'5P'CcPlhzYꣁ =*^36-_g})),nc^=ǺZ2/><_AZea ;ːq"\}IOpVpx#>^rNQ0VI FgBUfJj,.fU1+WqC'Js-I(4zy$טG90k-xlEHlQ]шQUֈ׈Fܺg.Ke|%$XrY)xA2(8$iHApNs -f_U#:7F ɀEτ%D͍DKӃ/Azp"ڭ4s637 puj~meҋ7ˍZ~[,0f~;^P|OŢNԤ P>ʳ7pce|;]z^5ϳz͸ħRrՋA*QHUR^]uZn:GLi^]]=`zeu/_n= wOeekmŁK~mejjk_yGfd"/ ?܁BJ7ŮIW6>iu(Rnq1W_!EAg6A%9!(3*N%r>(UP HPB1 mčgR!_)욕˺0eqlZ.f8֕_rzMG=6 /1.1-XsĐ$_?o.t &*v/ff|klX@aa/ ļJr~`F`n*&箃X`_](1t2gKFd^a9P(EՉLlC^?T3V" d1xbq0:KP%|•FcgmI %ȗZrV:AfAKz>NRiˡE58 XȶygvCFq6NIQI)hemw1Ynxk*Ɔyў}KaR8Qn2`K.-8LRbePesG‚6۶ oѱjz;o*rv oS yiLXa|;WAw>U,6b.vY ~C-,DŽHirDJӒ ʩx)3펔6d0y.HkF-2. )[FGۼve#de*Y/#;9‰ȾH5dZ['aY.^Mqƾ+yaʴ*9sYEڸJA-ո:r+mmyU.t<Μ0;ǿJ o!5R'5eS01$f"ع|tRz辬~Y-w|}9h)zC㰇20APlbz?7D߰*U4ϔ#r }ҜH9PU@v_*NJ>rւڍjFB!̎)ĢyB46R8I,WtXl<%9ۑ''C|B`!O&c2ةdLu]>cfL2A>D8\OE]*QIXޡBPNH]%>u2t2*QK;58Q)R])* \)_7EjX{ i{V4j~{>N\`._LJʴJ^ֺ*jp\q@䠮-:h TO㗑xtgc.-7g?-kl7gߙ^yI *[."/UJgʛկ>Cc+RXXp) #*(hӾz<&{wܮرC~]vUbg`Qiӵ(Q3$ `1E^4D{R;,ź$P'ZDl ha[0 ^И^.rvܮW>* pm[o&Ս(*tB]MKU RYU4c,HzE$eJFc&=>FErU1FwtK;10<tځ: cܐJ.5*$Ve!11T8Ѭ3`~]܎*Ve񷵌/;D/]&1)JW:G`"`pR;ĬWYGjdY=POKUٝ75>ГS+*nxs41n@W:ކb8N6â(|2"P5,?/E ߴyq44=3l.4_>}H?B^3˟#Yie"*8 h95aa H#"BY](Pʍ"@:2,"1rb H/B"ICl\=Bo Cn㔶NgNv$-S'ɕ3X<s_fJ;Vg+A!UqnOP2=zKuרMٯ\RVj+Jf\,6 c]ԏ^/p밳 gB[e֌h&>80u('#< K]F,rH# be}DDD b FрG!eL\hgit;ƕ`j>?tpWtdKx]z0ލ󥹝_/mKBLr .h\ÅK[`a. 
ҟ^`KkhiUh4 62(#F刊!;A Utg9NŶw'+_ﻠޞXb[>:0"N]﵏Jͧy*0 ]M/7,Yydc.2E4^JX )XZYh6䐛d{y, ),0s׻9XnCJP-xb0I6Sߖ Ljta>2U|rw/k$^0_O0yֽnW^(^X(Vz2QCôZf']D=ˤ@x03dms$CO/Y0\ܣxm'w5ge67 =Wϔ#ԁ[hqo6'wf;|fdiy#ZhYrOfUNxSVl&jV |9_X l :k_U҆v&'L w O{^tQ-8fjFUHX0zN, *y;M29\o>o'oT{j`o<(֕IͤE;;aSuUy )iujJDAڃ$>SJ턎s)W*HJDe}ʵu/u2UAm ɩ<KcJeK2o>iJJpp^.VA۠FHe:IPrFnth,݉ "P!,t@S=Q|V2oTb8QNU$"J !`Ѻp/$wKI0{^T0ua`81 `1 G*@g[Mp: !`Ť%B"D)59bCR aEFrrVտހ-[5t/fZhcHT{ I?3`,Sjt )\{{)wokEBO2{ k sKad GrG 胈 ;gV6Ol4{!^ G]B@ QaG!V3@ LzőEqt.Oux: <[wލ[oQ$K"qD8%&J 53+ʇ M%Җ/"NCE`g]l,·"Ajځ*>JTb*0`TS&{ :>[T/yW#jsn}oqRsoj7Y=W[|RAhW7pKr3oX}À3%V\jDJAE+w.'aH潯&#SQB9aGTI)Gi\580-Q.wtQ ΢@J''Jǻ5F_.q8<=L>b>NVG[K/sۂH[f(t|ܛՔ,Gz>&>2,,6u3.q|rF}iV[gU͇Żكˊ blA̙?Lrs5KDp4bf"ABV$ ][:[ [ _ݡefBp`ż@7WmuUFַ:VWW.&6kHnX u4&VTe#Z\9ǭJ5k*O++?㻓ۇw?|pZ5"g+0Ta$lIT~VEK_oPe$UWTBfALA{P m'q?]k]&l^a$)Tex ! Oov2j'€:#i׻ s;k7sa:p_c sK,F%qF#Ȓ胢X/y. dwSP }γUt^/IޝO˕SH]5rP;l-)a !e |4OS p GRV %KT%5:"jq.ƀm 2& PCEQ9!nJȹ_GD! ΚUMpٵ%1>}>*v3}+Od멪y[T=*-TY<'u(뙊ꙿF?<k&vNu77?xp(x6_>`I ydirFR;_4*s$ EK$)˭T.zV59 =՛^滙{ "b?|wΊ^WVbX㻅Vb u\<}~m{]M+?epqOmu޾L.f @W DZGUȍb 5U/L7L1KOZhz_ q:Ie6VgQm$"*>!ˏ=c:c>c;;3 9Lԉ\H$0I<2Ep&069pQ0Rg-ךROmZ 1AxGU: [ȹ" H/ vEۜ,r e&ص: l|Rrq쌹=sFjYb{j cS?ǃZ\)]"'5$5A[}L@N+?#}pGJ3 _q t Vhc"1MP )K$)P5ܢB)s9gQv_ks7Lyp$0ȜdQ.1E"K*4a9URlDCPBv5ͿhSr5 T:4O"ڗ6KATq9*_Tlb|؜@r^'z{7oyr/an欽';k %LL)>P)ELXHG P%dz@"*D SZcG AR-lB*ȹk)BX3P y[Eo«zj?p@s&݃?-P`mo&/b6 SRIpqyYb!TE khnn.E160ceRy NBP#$&DҥJs?b0,]w jCڝj6c,GEL8&(Rc@n *.DT\1IHϓsP[@+kB$Ib8s,;b]H"FuTtơb܏Q?甁P<EaD"vvKHKLBYSWhmBN ( pH hd"єm(S<(`) k(1'gΗT .~8qguP{䱸 pŵ &d= IO2l+P4WMIǤ$| .nr.qx[دH=LYFn{6uh!7!mH!q7"ӸƳx"&d|^;%wN,uP%Z3@;)wFVG)X5(=@rN|r7|[9ה']³l9VWQ]rlBsXM\^!jVGUu8-goS_5ӹ}e86\e1򼜫3p -jI'N=WKkK.J'*z7L;DLRFjd]0 M`ﮒ$P4*~ѨNq9bR1t H-G/(Qɲ*g%Ue7v?cYZ_un )xocKϥip{*]'z߭ͶB,Ycծ[5_Y j^*ƣњo˺&<75U~OƳi9c59)VDkk*˳AB 緃p~OAZ˭d+!9I(?$X1lB>P~Z`/S)X3Jɞ\!gW\l*SUR^ \ &NW#X=rp jè(<LWzxïIx'dԎG)wUb}06 L#l1tVM.6ߐ8gZam&xU hB fj9=tT +ZXHp l ɕ>2}p++N9UW tDžT]?r=~\-ȴfvjǂ^'l0ə}FԔHǔhE9ӏf܈vb[J[cLxD֒hDr*ou!,TJA[>aܕIhӆ tQhHHtmHq$첃,L . 
)88AYa*&EI '!#Nx̵2[r=~h*5-;kymo<݆Bf*}6TVC߆Tx5nCq æwGr.zT~z5xklj0:Zml.))!h)E;) I4ɸB c#)۰%D%гRP6q.ƀm gT&@bSG'o"<6isI]_Cb?knY5u^rwm(OP; l=o+ETRW1G_K{G us>)q4"+5u1b5PsN^krfqʅƣ! iNO3 5vh}yي٭CT>P ]pUחT 1`)~4څhp`2Qf$G$,]9?U|Ή :g{tÑUXێ\ f΃lґ`*P I"XrY>̍S*T5cLڗ9hO@cN#:Ek# H}eY0нg9&[}i/݌K]B!YEZ K=~+ G뻞mʵᎊQwQ@mQ*iKG|LJ^f 7sɭ&+B՟M˨' > NR1H+fѣ N:C#Wy4B Bm hcR(H1 u@ɼ9Y+WlKQK &"{mIF/FLQIgtJeU)* 3̃a9Q' :&T:#p,# )xdsB-3uZՠV~O iZ>2KHѥ(Qza(q!NAqTQuxW':#j{^71LҨ+F,Hp0+/aIi#B])"G"|Xd[㕅ov(xƘ;o&\)=,';GE񘃱&nF:jqȶ!*K7AGŰF;Ơ5i&0q~\4%dp6y`V6X`,RY: 07 k %LR |Bj%&(đF,ehea mF9Y F> }C i]tC e \FZ8e)ԄB9C=$xN%ka"9V3n'K^Bd|hRHgufr^z, RDLOȽ1D>Ih^ػ亇fJЏ;) Qd`0M6l!g$= jo\q_*-Cc'rV6a}vs/<̵a`JI$R%1<lh+yt`gaM?LSg2") *ec9. :dB1HÉ a7ʃ >Έ(=DV:y}z|1"0G;ss{~1Y[nva.Jo=Ɩ_:G\6X>\>EfyGB?h`YblGcv/UTi|pTV:Q͕,PQG+d }6;)j`r|֩zExκN^];ٛW߾){yx{ͻ3.٫zuKb 1Q8^FG|6~|hcj M͆6ڪ.:9%K[g& )w~e?bIq3;mO~ʁӭ$[WQI$旓Q'Nh=斪xZbr3=S=7$ncܕ]4eyjޖ.&$C a/'K>ރ1vQ 'r`d̛Ի 3nf´+B. FyKB c21*$ dBuUo5F9cMD-=KYy4YY̳Ix@AFT~,N7ڗ^phT8>yG3-E>/ÛGl9)8r׏PSjg1R,$x1EKfJtFҪF9n]N'HG?Lcb6H>(&-$jBzNލYvbOy,?6ܶv9anuuQL:?>w}U3G{Jt^Sgzr)ew湝O;č^ e#uj`~)ez̆4ʆ qQ|+_Jyk7AC]{gZ*Tt3ۮJD١̈́UAI 4BЖ{*N@?3[4ʾVϘQj"1NƠd d )ЀRY*ɽb!n>w_cW& zDb eHч8rYJd.s>'@KO֯W]> {)ґCt82t&sљE@+˄ YbUa 'J|#$EJ l?X+~xWɶ;菇7mlΒI׷#i]-X{Iiϟ?߽gZy}d"Gbb%!Ibl5 "7Xf/r(ؔ%z0hI|ژLk9W3)ښ95j<8c[]u<ܝWK_%!'ۺG78ݤ|x'  >7t29rٰA4Vly7 z 2PUcK`'dB66QCАpdruYCp:G,[YcW#g~Zp.&5X- }@+~Khӯ!ʵwBgak- 1g'E3.L1k5&W7+my@rEGtMB$A#ẋUbj4vjׇS?+8[hjFT5:hăF\g.Ke|%#XrY)xԴ !y9́}U,gL H6\&"=RD57KS Ho8Db Y?] 
@zqLk:kqɶzT֋zqoKYX|%HpBҹ䄕&+ʒDPwsvp*lbnh(uID AT!!J;TBnjxmћ ƁcR1[ҧɯ}{==*x/+ElsA )5@6@+Hp:X/5U@:#&g@`(E; &|fr([#SݱrwiigAS3޼zȕ +_KȠ?1jnu;Һ^=-gy,Wf!llonW-g=6^f=/\`E3Sy(xN;_qx96;7I9ɾvvS:νL,4s.{|g5}v!uyG\D=r\t(Ch@рdqFA^L>n]_SP^2v^i.,qėsyhc rmq۔@3zU;%> tΘӹmZVwe~n|/9F)?QjekɼH8SoW'(sZ_&Âa)=@gT:Wlh]p@.*(OӍV6Yha:#G(xX b(r & bHf96mJz @#K>T )yL,+J" E#b!#6LfQH-rBٿ-SZ9fDymW6r`=l癠iAżN}wӏ?W7?tJƫ6~z}tQ]R' +7l(ArA"@U*$H (m(bU3 vt5l: Y54X) 0.mPAxɜ1?{W׍dJY" 0/` e^Hv{jɒ+)m3"[R%rlQst(ӍsoL' WcISa?p4Ma-S8;+a qf~a/‹q3'4Q,Y-`.N׾Oxi^H/_nzzei$}bQhn7ج-՛ockձw#lG:-Լv&+W h8V 1;t$8psKV=[^;mxfrCٔ 'T4u~[r}#tWP޾{n .w}Vm l-"p5Ҳ: `b46zLqnQB7H3T)  &J%]I!ŎO=zK ֺ9!z1[j̜#^ )̇6oc]-Wu^1Wbb~.G s=]fXwaeйY=n֊:7ڬܱuu4?!w囋/v|K`1^)/>;YxB dVy6~F9F{pr@rBLKWj*i%ZRM݄ ǂ׌&oQW*GjQ¶B{RS\$;eˮ$q5|䣳U}f[ï7xz 5tsWZ8%W<^tŊ}x.Q bZGxUcuaֿeX4٣?*jza ^X}Ǿ:=̎կX`̫]]#(ß[Gƣ| [1 ^y*pOO6r'>x ߡ$w ꓁V\?r]],p4?3u=+p8f4F~ V><0wW1,+qx<+Gߛ|jL'/B929qai%1[.pMsf켲]!/Xv=7F|?Y' p{kg U/rݵQ'*DFigݢf쫳xt%XEky{kf6,zljJBJ٤jP۬;e-8*HSy6,!yA*}UuS&bmXUyUx{l&frj3'Ukeܶa-ݢObhGz5 ՚/\ٲ QqrM%F1=F$\'c&T̘n'x30fwG b{yw߽)KNxk`#MvG%T1[cƺ=҆:&[&K9wL2A?1A}4L4q[Ic$&>#CQ"'ȨMg&W4vMА Oڋ'E^%cN·%_5X"ysQUHM]}^%i(%%Ql1=\5TO0QG;K1l8PuY-5>d5HTxH,ap̮ڬJ5*"oZ AL` Śn|3੧Zc!M1G!/2>Uc `ӑ%CݢXTGoO@ֆR3u ӄIz.@k'> 2Ɗfga{@QQ@>PX q9wԁ, YCJǚڈn8׌E `58EQg}W%#*P-2\vTZ8I=R֌ERA4c`kpʆSIRs$F ,2Є6zf%niT58@sH+#TP ++$d BXW H [OX,40LbDb)`L&ZJL t$Hjï@Vı~.~VP@S1Dt0GQF[5ȊQ$/Eye(_u4VT2Zv;v3J 1ëܛ.٧VOlϨ`-hƧm,+JU8Pl46ʲ> ޮeF*j/`&КsGܠ\˝u{koF1/M5qu(N 7cB*2‰D9!"KȾ欇<Ͱ K9}{9:]oOl ~8qUuˎw!t lCDB$a1-7T@ۖ d)i HW=$FB+uXSAyG nS/XP G2kN(dUAQ u~5k!ڱՓ#7WC[3cT}p$k^7Ph4j5ae j3^\2 Lčrh%q"m=hrBp~SM5Un4a" J.(,MvFC8.P,޺(`jI*\e_l';+2B>xcT?5Bu?p7[he,x]$*Qmr)Dv\ķnѱ't^ӽ"lEݖ#q-mȍe瘐T?;w]na{]vz\vGyU0~ޜ!veqڢѕSzɰ Ez%Y!_ q?L~rt2UPM@S?gg)giy-4Dv HPX.R4jo3Zݖ]2qIy+)uS٭Z}N=j BRpl3(.Z9]*c;\rCBW;Š- ԫ{=^0Wjwj MZ^Jw?e=I/_"7Zʯ L,j[l\ZY9/_b}M(A#_Ls[ 픟~;}WTn۳ஊUho_yOtTFtv.grv.grv.grv.grv.grv.grv.grv.grv.grv.grv.grv.gsJlpG$m)h\w4aC+Ni(mg }7v ,o.O*XQyE%`89U߆ʗz~vu dاQG~6 /m WJ԰% ;{vjا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jا}jاjt\hewJ8x N 
ahmaٻFr$W~[(``~yy A[].#u̠S%[Wٴ@W".-t=VN(l8,pa<봆R?֝4bJCro_muTAgm괖vq< L6kdtzkgݾ~„%}|8+¦9Kr=2r>}u?'=tyd!6'.uWm.~r$Ĥ;/2t4bqNG&]?U!YOFv4[jzf||er#l:=09Vvtq2G&NI@u8/N9_ =.@b褏SrG;wRQ|e,归F2x(%G瀂(Ȗ)^ OA˞><NA<IpJ:n-V%K W$K&Ήd)$}} )Rqv5Y qhi6o.\_,l'<'o_jQ |ԂC@/J7I1$0Zy$N(,[{&%C8bDJdU Q^тs܂w7`Dž/%츽j}Su#TcwQyqFѨ<GhTʣQy4*FѨ<GhTʣQy4*FѨ<GhTʣQy4*FѨ<GhTʣQy4*FѨ<GhTʣQy4*FѨ<GhTʣQy4*'`(y:) = ˾1 ,;_ la!q={5#{fՊ8`~nvab4Oe]i!힮t?7]7406 @L3x!bұΪ="gJH1ca,#wn!N06ɎVAz0=52#Gx^p;yW}fNЖgRB)ur&ב\m1rHsJu*$ƪ%qܹл^ %Nr)4KOof{+]N㾻=b=}9m4 z^zNlLcHaw>J%):Ui1rUmFKbE[&Ή4!HX/N`YFec041J;jnJ \`GpRS|Ѐ%Wn}ò1mYjBoB Jvka!Gp*)M!mPc agCE~oqoC[%x ҁ/t1㫧W/%gSGvݷEL`郰>NJ;(] Rmhuc{s Oi/|u|}1+R>w"}?~hN!OiuzuD ogǓ?gmL ȥǓb wyۻ//8e}|QOL~NF4,_iEH Ӈh{/H߽{wɧdo@"XWtƕ|X~ -`~ ? )o!?./guoZ6[;^Ŷ!BZbG?ZM6Y] ]Zu9 V:( `%Xbcd۱dXe~zGY8@d(9*J7QdT\ AqN(idYrK .9@[:xݜ4H۪nw\7}Qr]W &z{E'ItT*Ҿ;Oy43B!WS|u]Wh54NkH34j.='Mai'l:gUqʜZNƽ SYwkwӕ ַşJ1nP2[Ƕ:aoP4ޚP/MزY_5:ad.h dt_^0㞨V  IB IB ֱ0ݤSVi(?ȲJ-sJ~U~䡻-.$ss0PM9p)6#+-}S kF6G'ip|CUΑ`͎9efhsQxfUu "؊ບ8'by{(}\nܻC_;4>#sF+l nnskfu(mNh#ˎpJ9=rRB)#rZC:?;,$Q<Q L%N+eN)1l^ uHiRoup|`kVJUؠx.Ɠ/m\G*jJ:Dfbd\乏Ib OJqqzE!wsz&Ytc A8wxUOvzwXE\8x_5..~NT~By~ξ?J`mؓ o73|?(i 3O+dGII^ 5E/a$+2\tog%κOx~hae}з(zOHvYrvoœ!ϫV'vD T>~]|O=-&[ҲՋZ]-H\;7||?~ϣEfJ=c;0"m}֢^ͦ_/FK8?Wo5pB=JGz0>3]Zu tN(M& IՎF)1ـX?X;uw\.d= 5ÔI(gQE!̭׎bGcj@fCJ)LFM9}D=c^+"`Mr{%!%E8^*sqրN:ү'*"-4Nfpفea.# 4NAR>xj;&F`O{V;48\ )s2\jdS9 (@+Fǣ@9rY`9H+0qRdN(eK z& )F:c"Jlg9z0ՖE-595[o|"TA(P0kb(_3h)G|4)IOd߽ |y+t'dXP% DPRfIQ9>br9ɎԪI-NzcU`lXy(y2&h'(p^qXL!N qTqêDD jV"{>ObyQIVV@j @{w@.aAeDHv^|(WMEj|jHN1r6ʁsRADB+2rvԃrRܪD:yT45`z}0QKCjuY)h!]?dT4zvzH*UAB{8>ZTl PDδ+a) #`wR[J5eYjW"=NtL+d]H!KGL.2LRIQμWY F1 }C,wi]tn"H턶D.#I,=8%K[R 9yTmt^K)z>T3=ekpC%)V\a kPH˭*`,Xe B^Hqq!i !o h ӄ"ܲscfaP;.4wsB[فbO%Qtu~?z>W??{WFJ`fwSC1uқ`i 0ikíq7c$ˇK.U: 'b=Q0FPs]~Ut.(nYWYKG^Ԍ<׳RbfXgvj"v]3$WNLV`)D0HA)c" SL0o%`\&!),In/<<:9N=( >hYߏ ̓\~ٙ)ILQ6y~% z(ơ?:^E7ESNh S>RJAeWdΪi~-~޼+_MOVk`tr 34_8蟜VuKeݮ,eٴ߲raZS@xmI^p]1d}1t@qOP,> .]\w R ԅJ]'+\/K-ґ#`ĥgE1K3F"L_ug ttͻcL{ P`R謃Ѓ! 
B=-e]ECyb(Z:W=լNה{}(.{zir-G/GMp)zb IN&Xp%/>0̇!~y%ﮡKTnK Q(mKYPp_nMmډ~N6YPV뼿 #I,^z=1)"ډ`Q6B@k&5ȝԫ k౛`j6\zvDТ@'hsØd"v:l4>nz_kB'[ҳ/jnmOBls(#"ˠܯg A&>L:8mqt;arOo=Yf%VWp: ϕ1Z e O2Z e Z@ZG!jyj1"撜/Я}xQT 5/<9r0: ̪H@4OGid +t ҡ('s+XК3Kw;޾;uݹ4ԚI.0&̭Zliܡgo0T<}X^seuKrOJc1FmP&3ŃՉ靁C!Om'3p3N,:v9΂$P8h{{=4)SZu1T)b+t@Hm PwE%栝@Uf68_ב7,]Mm]lj𑯳+ݎn;1ImN9ZvUhӎI;"PVv4 u+7͠ %_dPVRS@Q"g9͙241ewZѣr>ոHAuMx  4@K, ,`xӀuR ;,V"Ʀ0bmc[F53"1\ )ᤷ)0Y2bX `4xZ`֛VF #$$8pFBb%#1)A`ICa~HiE>\Z묫lˋf^;^❶'=r»d+i&%eZ%Sj5QTzt $x G;^| ^l\Wh (,DYsTqQLð#j=}r(}~|&G)<>~8Ny\C%HIQspO-yʲ{jdGxUчG?xr >>0?H^Ix\Ο~?0<.Bɪn¸( DY*%WTYQz*½Ws'@xP^P.`JJwyҫV3ߪj - flw)W/zT~_*1YyoCI xU< LgQQo ~|UgU,ezdlSѹ?6=Akz,5禫M+D$}^ .EYjCU' Qj("tU6MU5tJŠ!$NW %;ztEDH0^NW .mUBwtBtz|rq]]ܝw|~XՕtU =DB_ǃWd ރ" Ns| #|"a``AX2wN▱,N6MVq*sN" 1tsY ͫ?;0_ 7q /i/9v3|˝R 3A56Y,IQN)93ZX&z|H*Ԯeh-hw<3tn0y VG#d`Ek,f)_'O(n%ZrL3DWl%ZuregwGW/:1ˋT9#9?e;K!c/+/&bomv9>+R-Jm#UM&B,5 MY@ YREt5c+8B-ttJ(WvtrJ!Ej]`ָO$ ;zt1V`[CW .mV6L(nۡ+|Ϧj3Z>3]mWg^ -CW\y t7+mc$m]`NUk*}}썗UBɎ^ ]"V-XUd[ bNW %;ztEZnhdacE)-` ;$ZD ĭWFtBv4iQJŰ`#8Vd6e5[ sm^"] k1 .em7JvtJ2H8`%ZCW#JhiwDJwtJ1x+1o ]%TkW D' PJU{֮.oUBP]};tE og_j sm"Lg%7CrBtE6+msބ{6xk*խ+@+I*tGW/J~ u.Jp)j ]%L4J.wtB 3"J%k ]%ZNWIWJco ^ ͛cϜX5Ud<frɴB0ry6 ^(}{ij{4T3 qt\(;04l60xy;Ad<|{ݽ0]hbHdc9L\}L75PIOf}_~Pix`,(3xn23ŃtJ?/s oaOD.S%_aK8>L`EUޏ C3$e^^pg2urJou_*bۂV)8A\Z$ƗY9,J]{fqpf8$"{g9B BV{"zg|< .6L>2kJR+bi ˩?7vI??N?1vpnsn&][_ѝ.'-,'˂\0-ZfQz^@'aBϟ٧;&Ff_19B)X-#3l$^62 u[g(,.$Egj+}9\Y[ ͡ih{22Ƴ>FEn50Nj3 G(; <d{.@W:$,%hn7&BT ezvD\]v4 u+7 h~Y/AZvTJMm.E9@6gPf۴0 6V+z}:-\u!5i~υ~4C 4@K, ,`xӀuR ;e,V,%w=dK.))eg]RRd/Ȉ &1x'=4mЕ c5qƖ0>QH< YW䓮ƫ~Zs_asZ|lwV}~~Z!:a\2tL$"@!+$\Y&\̒^ c6D3N0d"J][^D$#1B*r@1ڞ]-qjhi7a4oMξ>ۥ nsKwY;EYSI7{a,W7f8ct%Z _GX̝t[ouvKOk~'Ƒ*K/0s.sL$eB5R#q 9;LSX,JylBAO6Ze ɒ5zZzx$ُJ5,Br£bD("-/3lhŷE\:> ?OGWoUÜ#W QY(&E$*3&{p.CU[}$ AHQ`Sj5  oǬC.3hleĮ&~Ip1մ/ j v' ~dBiAk-2HAfΜO gD\bDB9)Ċy 9e65)H" "ND "9x_[~9磔 0sQUFD5  ;W̡TƇP4I%Y$󎈂d 3iEUYΘ6@qV CmL )DK&572[=h7*#b5q#◛LCʸzYKK2.\R9Ai.# WH:$YtuT$BkiG_<aI%O˷w5"g7+4^7D?)Q)G I#=W Q<РȢ!g4Z x5C$Eu̵ y\!> 
nSJ>\R${yo:%rPh&_=CCOcHtruGʴ?w6軏_oOI+Sze~qfy8gb6gnكPƅH_ƕ9=KW N=$V2ΩtTgTJl֩?NB Y,6<{ӐƁcL&@h]QEZ$^$hr ٜzYQ% Q>&g MJl9diW> \M{^‡w]oAȝ-J"]3.ޑprjPKfzw:'=޷ٽn;oZ6rezx^w=?46=oܭtG}hA[|Oi[GGށ|űا_7 mNS1Fb] 6g͵0]PiKJ~1-/eVoRxbiKe8ic7&+UnV>\-4/`v9`J~`w?~j>[6M6i>ʫ暴;4XoϜu÷_F#W|7`$пV%+Ji$6?]->f3jwjuez*PO!(P`ή5!D|j;h5vkT !kt|F0<(̋lJq\5W9UXcU5链~:9l}Ƹݫ A9`&tUd-Uf!ol،}ku1lU| Z9T1yeL'&kep R&F P& qsHLC4 :5ĵuWPs wtY7[Ozmtây'._E\/eўԼEJâ\׌8a}zͫSw}ݦ .ɚD#] u \y6l~|}|p ăc-D;cset899l1j|dQ€B( )p8 B#N "{+3ȬD90>x|lh5J|y%>~SSoh;T0qwl+x9fGN'9y VP aSiCr2U-8) 8qNEoCs@ APQKOF.KWM$zv\B#JێNWibW aYB= nGD.? .d0F#1F pK IL9< =w!E\H p!K4y&r"Qg 1t.gx-23*&NL֭v^m97X'v"N9kAH <UvJjsiK\|Aw51x˻9NMcG54V즱fIbz&*AKVN?O6r'?[+S|:nfcOnKY4Zq"k (k0sʘ+ $J qȽ x&&)=Z7rʈw$Z}/|!G#֏-֯P٧QB|Mw_''s)u՟hRC˹:&#K<q"F%V^NnW@ޯ(jhy\ l8:o/ps>IhqѺ}mS;ߙ\?Ŵt]?ϛe`K?4mx `ˉ^J4 w<[?(}!7 Dcֽ$-13dPS MYJ+ g;EN@4Π 9N߉rLn CرȌ4"|Ҟ|[LaZxJ8I1/90Mǣm`{ AȞX$G9M4 ]A1ˇ{; ΕK]"3ɎF%a*&,3B Sh< p !RsB$Ш EzTY,# tHƑ:jFUMW% RFCB)&qEVzD%fA`X$I7P}b똫Z"13 [sVa>"M$5=:G!OP|]є1m&h= 54 c<".pzdcY_붵7qv.v2c1hXy*Ջ~NcQ` & sHf9QhMrAICԑ2ǒ{T )*ErF@ `H"C?h$ʛ.ٿp^ArT#|fA9;༰\-}\\<Qˀr䟞s&cXd*דH7wN'(g944F?g~owwLlv]Ռ)_u|?~~#7iUͧsNt?y`$ߋ00Q^2RӍ垸=DFtߝj~ҡ݌$3V~vhnuFhu;x~cF lO~|{JQ<4i@ RFs& J*KV V W V @ l29/z!}wFG4= ։!+kUn唣%YfTtsΏ~2=,(A4Nj!Xl9 o Q;&2Ԡ!; xjyT]=Mudn69]~[,vk ~;,k tPLX9(kSubHJpP[<(I`bY÷ cǮF%_#_^U&HCUfE#U=z/z/zQ2O~0I TTǺZ 7Y;'hrV, Q+UV-~%m"Yt,2垼 o2lwNCm1x SUmhb;|K!BƂLƘ7Փe&AF֣}սwP\3& .t.je2geB1uA~p:p;)}Ga2|nw;N]n"p%)KFJ)hTb>(􆑇iX(Pty0 p`I Xgl30)Ő5XX5T -#tħՒ5 &oxz]{;nV*fuK+bD(G$:C`dh$ o5^g xt~w鄞y^J "<9h%(P"ZP-3q3Ɓ'D[VS躏׏* wqV/#)`D66IeBQ`ˀ']1w \>X}5a }kԢf5C!C!RT۳R@J|^5 ,w!O g sԉK$Q'm42MKU`,'6q2P+kp nhQĖMZ ͪUwl\dAEk ̇<;n8L[*>8tǗ 5=g7?2L;P j& nh&09:=61gMp1}ZG)zu ,쩻{3q`Lڌxg#ؿq&v Qe1޸ӭ2 пC'ԃq1T y&Bΐ(5g^0!"2t*M;_޹fGν(yޠ7yФ**HAILsA$290N^cL/~k Ƶ즞w~E}~4XǡUoZ1JB=|ϔU'v_20Mu[b-Wb䔬e E[ٴMe6Jgwg^Ʃ5Y}RBz%4O0$T5$eKQZ IAgn a&1RrI{A[RsBHSxm!w+pVB ݥIT/ksnuzlx)q6&ؤe(1ap*y%|]!\VQ{R:8\ \jKB0b=$#1D9r:3ϲ#@hGV a% 8:FZtLg!5t/>[ }~w@6N;EDW{Q1E_-%ACN XV.uvI×3F4Ұ)QNtFəg3EY umC`1&Y, 8 9m¬<Òj#BMGtJ 
[binary data: gzip-compressed log file `kubelet.log.gz` from a tar archive (`var/home/core/zuul-output/logs/`); contents are not representable as text]
FOxp*<ӌ& yfH۫?'-6f/gk)~fe#8y=cʹ"+Z#mmu)a Ju]s*u4'sap[n歷wϏsw].\wYfkˆMsrrq|n!W7:#J6(XdIJ'Bz{9WT{O?}~fUxsCj.&_nv/nH*f< RQbPT^472˂ͣ GmLQ8U=}R1ˌ}ǘ] M[(4Fe:^9uV %R֨ѧrrْpq>r zze7eh{ s o/侮WшE-7SKi7 !o,Y52?HSS9˶ǧF1oA~s7 LjIB1)&mё(')%N (>UVF|&0>yKDD g@YBMgRiʹw9 횁=C6u&CfK/a~Vk,/﷽y-zi>QhVP76BSTxr1bQ:Pl,n3 Njk!"+PEJi2<zJl:{2!H)ǣ)K,c0KHZB`$ FRXl:0$w5`ɃI(,MhB]XJ$A[(f%&CllPj&vK1^xTfV|28T@@ZZcb(Y.pTPyutRtu|baEB[mh8zw܁V9yN3nd/h"Tl0x0.ƌ^GFAZ}qv_}Lgӹ3s!vtꅅx/=:r<}uibWZ>be&V9K&(Qw4Ջz~*]ԳHh-ŌhD*$\\Z䇔I4"n !)(u%f}A0AI<-iȮ(t5ݎV蘢Y.x3eɼ#kKД;o97!..G_>=cbG<+n&ǐ-R 2"pC"R 39~ő"1n#08:|cqIB-:XJ],l:5x#zz\13k/V~Tm܀w3˙)fs{6^_ݞM }BFkMaژBh*5a U2eS'xR#xA0X,P 6)3o4 p$&l0)#1u[ 30uQ0ˠyJlN<(njuLL`QJ]Mghs N]_\.O~y;q5|50sس3^0 +fbzz S䅛|%??׃Q`R=huǢo 4(l Zi}<'> +1>4d!R"df,%P "FYAX1ra8YNRTb;/%&CY L &tBI16F˰4HkݩŢL-!FV}AD`ۣi% u0Vg٭+ԵHttlU;0eZDuSS.]1h)AQsEݏ@Xk箌r l>_DMus%ĜdL % Vq~m=>[y tTQIZ1`m9D>D&Mk84xf`&EdJ4%7|$^J+> Rq:nYm1TeXx-qRK~GV#J|'E*HBe!b<2D4 {щw:2=LK$6%IVJksY&EZ^{T644)hU(V "iAKL\%I֝GvsEM==LD|б-CQȥt9)#Z.B#16γx8,f ѫGcM\vt%!ٱv z3 k Ӹ:^Gܤ5iMѽܺZ3 Q,$MtY廯?^?F*-5CP|eHoV~C񾩵N~շ֋r75i68o ~j15~0x3C c0:J|9>i'T @ yr)NάQ T: )] |z*$!$!(VHM:<]t-?az7 D:}}x~Bgީt; 3#2k1{m]of޼K|5Y>>I{!}k׈Q\}u✗zw;cpյ\T_ev᧛oׁكd.—Ḍggݳ]s Ɠ7׳,oYVwD㿢-##%8@,.y>XT1#̟ ǗwG׭H_d(rX0dr*r*#m!b Jէ=Y;H{Ʌ2yثlslITkdF 'JjFSR!$+́Rx3ș^&/$uѹsY55ũbjusK= :+9iR:džzd1e6j} drH)Z窩lIP< EQ: k%aTA8:\!dNK:vbS*b\.;83+`p c׾F~VXhʘ!*SL&*PCs8kl栫*U;*˓(˼?gP>Q^9m˨ѱwIRfHb$M%S概)ga41y'.x$@Fg<L@|慜\ lwlό's~bZ=:)G~}8\L`Q Z rskݲuBt;$Gm=gOPW+*˶>hrA)Oox5\DXfqߍR:!vp|y%u8`]8|UXw`~ɩjk5"CAt(-5.z0&Xs4TgzTAwm~ݛޓ?tWU^V+vU߾goWV rBvI!*\cs;J/Ėyl1X:ssgj*s̟{nl_q^Ϲsi-&_'NZNF2t{E}@,|_#`x@sm[Z\!kɰF;UJLn*RQgcO,UHtA$p!T׃2kC߿.}wpu'x&B⹻C>L[§u=[[_->(;,w1oz|ۖ\B,~-Co"[>,8gM-U)AGj X!#k.Ft` oQm=`JFn[׶ 3-'gggˀe뾆J[ֶZh*j[ZȞ[x j-5"JuZ' Zem/BP/ֿ| zm?"mq*=ķ?Ի7wfuq^>߷CZ?}:(2<;f!f'v=7Ct@vE7uytqOןy^|yR:vqzzȪgO/ޗiiL)ŶV-Vb&k} W~eaY6bqfם͛٫ Gw/vs>ߏƻGpiGOoJ?Mi hY`5\hYCk-k(=E^a2Z< κ׻C+Awj( Fzt):] `jEW냡+rubag{k|^z \g}jOҨ+jHWL=? 
jI ]5LBW qNW ~HW:|@tUP誡v骡dҕVJSkԀt=jh;]5ތt-n8Χ'm5,}9~ PIDi~#Xe2;-5T4Ӵޗ[W oQ|~vR~)/iผwg:7 MPcWKx(Da0] \׿uZ_J '~/2HCes*p?Jbɹ$2ITu#QMľ;:)Ye.g"lbvףϠoA.k!T;Jы>T=j-,{mEY Y_Zǎ7v-L=Ԝ}:ᓍ3nCi8WD= hY)!.*]2&UN)R \lnp6ﻋ-(7CFجC Fp0tZtPxr6cmV-+kQWNN~ܸʵƵꡘϫ S [oAO g?~a22'29pߢ{"x׻[:(Rk{8a1 ?sfAKJ‘_#M ]MdU6pJ{dGztuc 8"_.k7)mϬ+02=9i=&ڦhWwA(}e7I4/?d&sKlU tWy,{x-}7t!yCBtJ&j6獵~Q5`XMT\Vpkw)5!{c6kW1DBW4d-bHZC kb6ĘUZ%#U لܡldp`?_*[ђwʑ-%"ЗetI&#QA%V5c1x~lſR)G `)E #(+tA z[IEC.[}aQfƪAf1+'eltLŢ#;_S ִFf>꫽{ي@kƌjm}Ii&E:@1V*aߦ@ƹLhku&_m kφ+p6p[oD # 3 46.F-[OMdC1oϛiUB %V2*J(<E0lCA5FHzA&gu}Vݐ|2Ev)9DeY_ɢdA&&}h!nRZAdž$( `brQ-*# { V AŤSҎ 0 ^igm A47-mFLȽ`ALAG%D""KXe0PA^FV,Ĵl2IrHd- 0VLXtXĶ>րZNlm|TŶ3[ۄKŒYqH* ^v&fe=>8bv0=iQliR#Rߗ-jed9>DC& j[M r8 \ImާX 5!|y1DXl"z P)vDXU\[ &%o˰.ƇIsj-& qw <3 N/7n? TelʼnI8QsHhd-!0!ce?]y/ M{7].'׻\>* f=f~4M1Y6sʠf:[K9`"(}d]1Q)"+cs @+Uʌ08"I] 8*H0HZF)3]SA52V^W0ː>D49n FV[,G~9 = KA)or#BdG薊4E&&>bE\;"AaCti8e "‹9Hb2 $ |<#f?#AHAS!X ^W ?!-9#2ZF=QرbX`Hȃ[c"JŗX'DIJm)ƌ~cܦ@393!Ȧ : q#{wj0t 3udk)H0"~"PqB& #]Eתåa)Mv6A8a8f1&6dlN\GVvHd84Qo;{6@)ʳ!{SFi)ef{MwBj([h|#d_ODVV fŐzY ڏa-6(<毮 \״R 37Hf2zVG<#tw4 bPi9b*1߅RGS&eV:'Zǔˈ|X1"FӮ#r3cAň'7|^Ì<.Wd&%xX[3BuSj| .ndVƹGi\t Q)2:]Vi9&Lj|FGַUOa/4Ń '6ghźⰈ˵1:[FBv&ru̎u ƯҔE"&r*S8IL`х(2b@Vi&I?b+k DrHCۣ^rYfڡ.Bf31?@HbjmU7jZ"eJ3;8L#!и#j͖ۜ2q矵5ԢfY-%Aԓ\Џr3uʆctEl4R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH;r {qlJ6Z(@8M@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N#v&3¢eYr_Ӝ@PK~N F@1H@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': tN ̰@8 קn@PhN Y@Bm: N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@h='.娩zsm=P?ݥzIzyHz2.A0~KԦzRV/]W"8+\AaWG˱#\A0y Dn@}n>WPɖW_ ޞ9ziE;-ej9 +gk*6YD /\{Բ]=D #ĕKG\An4 DskǕdW!WlW"׹^p%j{kQcg3Q7+Qqitu9;g]Ԇv\Ċ#U0ge?(OnA'lݜ7׷tsONdW͟O]|֊yι?X;tb̹Mg`O?Ya'dzq+ɐDpZ{տ[ *=breW"y#*jW2+/W]69qHp4Z&W^1,SҺp)ͥJ 7np%j{4<W2H:s4J c<We⊜I;•ή\An6JԒY;D%G1Gں͉͛ NWn+}Y>Ͷq2B?O.1P_~_ sy\u.!)-`܂; 7O5zK)ͻ/.뫋v&@KÇ<$?1υ}~?0a?ך]YИo?iLj T|/qOKƢUn.Z"t7ooM oA1xy_> ~j<4Cn`S0lB1%}Rw-kv# Le?7qC<\ߓ0;E[zR/g㴗WN|&jT&S=%%7S:9̝֩UgPC̳hxac)B>ε8BJ;CJ2zǕ cU0ΛNѹu֞ 
IJ<ͿA,aZ~Dn`j߈J{vLj8pDnJv\AeFqBt.u+6v+KىZ^=DO#UQO8 L?cE&z\J=o|f Z$er\62k~ pŊv#\A0~p%r\Z׎+QWLj+pL\\^p%j}Z;DeȊ#hVV^dY䭽ɢm[XdQcد:? ~.sQAEV"ŭ7}j&$]~+Q-4<2@5ZSϕd96sc45ܼ 2IQ ,uH;yuVv+Dp r&nɠ:B\! )t+$P̖L]#vE'CDr|cozz )vi$P.Sb1m'\ApD*:C H%3+Wɹ}GDى\JԮ?(q]0;ێp%r9+QVg'*v{ pf\͖ ޻Z&7dsd2օWAqخL WlmW"Ѕb}OǕ\f#Ou+t++GYv\A%FWLj+1u+Ot%rC7ɯWPU\#|4M)Eb+uk4&FcE#6~.cElQԫ[dJ. `ID[!oB]wߗF:ل>O_[ys?{m[7 /a⼸~}zWo'r)[u+蒯_~7k=܇|gQ'پqUfN%vQJҕeAt+U ]eڔBWVǮ2J=]!]IN:BJ DG[a> ΀Hh:5F :*3J;HJ)0 BT3Ő*R ]eQ2p#]ie<89)f2CFz'h,UAt@1tU-&fvylFɠoM/fJ~F(Z<:КovJC:GЕMZ*R]eCWҥ NWΐ(DW9)-?P 骾IwJE1kY.۩gׅAjNq|bL17euN.[ cGil[[(C ̀ fD.vFٻgbs_V*pu1󭈖Ϸf=]%] ODWehiUtut%;P+\HNE^v@=]M#``pBޛ4fc|/Ca i8`{Q?ߌF[[؎'7P.F- >_`>!I1\ \Ǜg~62|7? /nlQ?k~kolT*WGEYe`Wa/)Wo'O)?.7\%q_>\cX-z.ȿtXa*ljNуZ~nlGK2xD>j`dˠv> *m~ι@1b:DqApegf5Y J0:g䚗˨\mTX^6φ5鄺QƐKeE9FYV™ c0u2:g2ݤٝU_u”ժ}}3_ȹrM&{[ݝڎϷzON9MDGχl7ƫ&_^w馏()9ӞSE11.DXPq*P#4Z"@vyU} J֪C:C:}KOlc\ZP)H(JB2Q43:E8 828mLRxwxz}We5 s"f}k4U3.=__ulQQrq֧J*A/!*O?#ӏ~NiQ;ér% H40 mcA㫹'ӆSR$:&8r_NgŮ [aj衳Ceo4Y>MG_nYYpGt;^ؠxtJ"*XOJ/\ċw(;H7mK*Azf<>#J>:U$KMt:BB:12Dn--Jb,/%/wCf/k?Q+ٛǹB^=[hݵ6;ou >nf91{}"q ONnǍ!l6Uӑ*O()9Fv%M"@ʼnJUV$AU/d|5j',]^zj!K!d`!z xY)B0%2tRLHr>uK˧n >oǏ)eũ-_n6:8Zsus5agG 3!hՈAĤH@A@XFJa)ڈMf4YӖ58Fun9[GCS˝vH:ݱ-r|)zM]Tg]w=l6/w<_# *x 21`EJ4KJ}g_Wa27X{6xS$3 5P IyL`zA^)\7iH!Ɛ$PO d'AZqHo>I!FR6 m1l *uN53:)lA3!ԓ< q"4=Wh&@>J<:&DDbK8]qG .E IY["gBIgNϝV;D!RatAZCoVZs`X-;b +O8+9!œ4 @?%%R'%#$Rď7z(>8zjA PQiM넏r"*Y &@Ygij-/cT<,tKK%2/r/,5lrB'>0zd&ljy BɒoՓq#jqs7mnGKĊZIdBpqD1)Chdz,\ȤLVs5z'cO?g:[OT=yn j 2x0hА1JDT:[fqQD)=d[O4OƉN _ 'c5^1r*&PT2* ̳ĵ+⮘A{ M0Y }#WMp(mD(ٍR2w '{%^mitBg%:cLC2ER H-9f'.$K.2e2*#x'NWωm;%UN,|Z&4i6=_#sGINq6oFyqeEY$Je"){?I' ػ-N~6 mUA2Gջ1]лSM. ~̾~*ObAR_8o3^2Js6tWw6LWvGhַ}28cj'o>?^B}_'iڃt:- ѽnVRwj66+8?`;n>2Iq2xJ1Am{*~Kg&&me/Λ. >7 Mw~,@чgcsU|bLr`U3Z%uFY9󂙜US)6D HYĐ~~C=m(?~=]A: eP.(p,"!0$܅DA2%0`}6 J(1s\=PuB{+n{3'NϚgwq<$MDv}N]ҟ4$vL~R0M=wQbS*Žϑ1CߙaGPt|;/!@^AB^3)LR&5(YL0t)K*Uf#%ǘWZt). 
1'Hj>E}߅"v#}2 Te3Fܣ$r{GeQShq6&ؤe$1qp*y%0]!Mr-I|ubT8D81 2"K8w^1ETrʙgك c&mX9PEcJFXzQʇ{=o{m*疼(HƢ/$ACBA,+z>ŋAOfOyiXr(2'dtLP:3b)xdsB-35ZZqq2'wϷI->2K$Df$R/< 1փbP('G:A&N HhFe> 2 |6bLֳH@Nr0+e$ myD%>|_7o)x𔳖6Z 2$Vr)lmRZȕ؃krD<Q&< *s-{ôh(I$bty "٨іp㌐8,MO4pGiI1mUP)Ò)@!32>8ɐ$0Rt/63i!*ܰ6\a!'!\(nR;5c;Zq͔6^>8Jʵym|J o'8, Ge6iK~|A,J? 㓴~*BPCt6¨Lp<@q0& 4\OoI.KTp.pI uP&2cq;Kr85&GphOqn=M=N>Eu%Ҵh>_m:<-NF2JElV ĝZѐ?}5{ʱQ~։|Ҍj^INmjټEΛW5b(RbM̩m<'˹w9Αj2|^zrOpR21fs$t5_hYޒ&LD3LNtrr⦗9`7zɮQޕV.֣N\H,XMsUƈj<_tjSMI+,8B6>`I3oΆ&+aWI!Bdh4}Qdޒ"k \cL;棆BR^J#B=^t*7x&aÇ\{Ŭv>O؎uڹ ۙQ׍xf!:`jI:5I3xi2^֜VkbULf[Hs+.(A d pxt%I)G_7ڬs)#C Ͳ'SX+pl<:E)YgcXXƈJ9ϠM&kdg{ȹ !R@*Z2"Ă0!Wҁ1C.y!! (2å_}: B߾>ւ6qV9 bh@g%B QV%`bP5Adr6-.1C<9JcnMzWevkzͻwȗ#⊩C fSW_Ub423{Xi0^>{ݗG[g;Ĭ$ e=ɸ-wUvJO,lKyg~k;zvt=mwy#A.ٶ#O`^$e90AZ/zg4ᫌ"RRNi0TL /r두NM xz,1%VΗպ <]6>} s0]W 6#YFTIhcJ]2ȲJH蜒Ex[9"ڷi|6<]/Zs[`3:Z5iQ̣iY@@Oz{Laߗ/1?Е7~3덜!uLFέG,VK~΀kvڂK5ari~:,fM׹8`394nj?쫗οNxc Zf%kRC6 Vѣғ ޕu{T^t84v9%d(UNl)D ND&m2RЫh= ab CHч$9lxkG l`w0v`Gwd&)6RݍX8*"gtEe(1l06gɝ.{H^W]_m3\5@Ą!%yӥ.X+ cR9'ێn%Um%-[@Ŀ5LA%m8Uz62tNԚٵgI g=gXJ׵~33:y<=u)y7_&[ٵ>x}a5wɛ>tuB\F}ʥjoفy=j,/FU+R5+S >ɓtC pg1*բLE碥bC'o\.Uuc([iu@(Z{f<av#ZiơB=g+Vd%e~!ygeQ~B&{lTb@d*UeY!֎0ZmUD0BHGPaPd[2,JBVBǨnS[,D )̮0$@[V>c7qybfX+׆kvg"~zMTN `tfk;pkVlC6l`"`RƶXW7fu0NL"HG}E,jrf<{R*q(~MchF8zĝQ,c 1,ڎ3eEr *( 5 "Z R M="z)uܘ h:U9&X:Mu)0t:8 IԺB8{&~q-kPVZr_twr1W:SJi@_F̨vY-]g2bÜ[iP;> [o>k^s >%rwyEGܘ:ُ]uGT;%屔Ui^Vtn,e{l^1>3ɴz2: Z^5 ϋEī_υ NRp@ΘT<2e r_vPOc ~}Jk PeFX@ lb0R#,ؾHGr~?\OW0;%u;{RϹXe 傟>%܄z>{\?N?J֯p xGWcAUZ;xQzDqx>_Ki\)6ƲUj1cW:]^C Z)5bkQQoׄ=2狛DɏK=0j_l78k:x^Yj){6yNk%Mo'_?"QC'ϫq3NNr^寗ϗ["UMYzOզSvWfgkI *RE=Rjޫ߹1׉9{무\q$OK FNKJ >b3^=S-q|OZ76(T>;eIn7!^R~:m< {<ӫ ^kThևngק\kX&XMNUus֣CRACxhkAk}]'I>?'?2Btx,9ɝ=Xf3Fȹ<#WGՠ b,:aWHJ&5Au8,Qn$5/NjԐ9kO8H IpQi٠!+-х"Jv"G)ƺ̠AW-yJ>}׋S^OY9b(1CPۚNM)s&cB Rk > d%ж B]Y5@uoקqwNwO|Q5wnzu?2r(VuOϻyS@[uuA:j8Sv%?9d^:4%N7uζ4TH@ҎϘ8Y0'vR(YPr5M$%(5H1@6a60sٽ,ED #?(~qbGZ=>XfB(k3%&&PTx5hI BJD$'U IƤ$I5c|QkC62ؘբȭ6gϰdX 2>9  ` x )CBڠH&- u*aeBz8M5 VB]^j7&8%$)Er5 ĖY2JPL.ć"6а<خzf_.ӰlLsGgnuPNV~&(4y %zI RZᩀFml@6vM׆޶n 
nC[52#4_)k)ug{40ҝJл3V)a3w^;#~F7U -8,ZC.]LqrFZ2nTշCc6 mظ[Raoo\_lGbJِ2vzquS5lօ ?`ri@LJybU' mt9> b%r=)3w>uER%E](f+Bg/@ymbQH Ic,1,C, mbLR"Zj}**FN:뷙8z}fW{4=Ѫg%6J"ß͋}^ X}6l:eňΛP2 ;P:Hh"vE:WEG~DCo*b}-C)gS&`^ج#f6]4PtBBN<ڄָ7a^¹}Mzκtɿ߾1Dm= ?BWz΃|f!.*(]>_r/ 0#+sDiJ1N׆Jrbx%rzzSBdOؖA!8c#K^ꄵVI)jk⑙cs Eg+C*jџ+In[ɐ h %c 2 M >`_'  Jdu`*I呭y܂77`u@ٖ:vGzBny,_YT7g+fw{v1rڿ_r hgIS)B>x+H9Vr@ 0ZARX(NRY )3o &?.XE™%#1P,AaLga'eiX$v'd-*mr!ɑ ,%([]f{@ǫ%0\C&Do@lE.yI{vf~1#mciw̷IÄWU]GWy o]6ǫlZ~n "R2V|!A$[BIBh;mO71v|җ VlFg2qsD@ʡ@F"]\tz$l`Ka$QT)b7&i 7i6qDY}Z|&->x}(,Z{'tPG E:Kd%6.2 t^`ǬI1хe%h,{8k oIϋڙɊ#Ai]B2EmH%OFhSLAJY%h($+2I.**Pyr)b<v D[I5gO;7`fNv)*Jsb,NAz5,1X 3%+$u@bfmZٛZ G~eVp"#2b8 B EkAm /l2 At3yȓ^[͟x2GQ^a$ل9iAy: sm&J5:&cF7WOt[ {6UNܫ*v#\Z/RMմ:~YqK) !)qFH) AJDTLu$EDXk0~ѳ_39dc[c2b'5HעS*$1xX4#u HuB"!bڪR&Q3d JQ#'N20+A+HiQz{a ˭"tx%E׵Cr Bq''OY\3Ś/:?x4 .FXYYZV=?\k~J o'8*Ŏ0GzO88 q&:AIZz?$7e)FK &8 a3.88cFKe<9i5e #c< Df,n&9{InyǷiA8Y9os:94C[{V-bBGGK gd"6+mNhU8+Ngo/aTy6zgoFמk=Z6/5غ~[|*0A;XsOrn+Αd:||E췞zFҌjH}Èad]e$ 'V? Y,trrbÁUoԾgKrdW2RV>Ogt}>48|4]):BoTOײdy{g@wo~xoo_?$cIGۃ/~}hcj M㭆vڪOM:=K%8V|3xkw̗2ryI푙=iFɸ>;H#YckRfG8'p8nT7qLⵦՆ7YZ4s8m7< Oh* u>i֜ bU:xV bxpA(A@ mZ1pMqX O91fsNI BrJ:іj* *Xˣ3[`Q=cSN:%͜gM&k&lueFΧ PPђL' 1h'Qg2}@kI, (26$ B[I[+KnQzcR,؉F Pƒ^xQ lU_ې  Q})1ȾS|!6C=iA7ÍW3TpaV 2m@/ o4gJlI)z/Oף}=BU )Idry 課MFFh ))Y,ZF7)"9(!.#K}L%Vf \(E}1M5r4g}:<ɇXs*8#Uc5I6en|ł %FJ*bCdKF©hJ]4e JRA: Axo޾jӴV{OͫMp ʠR^f#]VǷ}Lauu&M˂]wRb=q5ʾZi5!8Ɋa1\N`* 9yX r!Q:Vd\3B%f1 F&smX1"i暆BbΛK}PFVݭNl}Y: 3AL21ɠjlMlQeJdžR57djttqsEjy5">Fvh+2yӿuk{7Y׌IK]\dˬB2J B1u btLS+@&AP6$Y&$lƒ@o%rz c,؜ͦ_>{VzZ8lwrΉvCޏKîڇP dXS9`)yJΒ!٨|P x<7,LA#?4?3k^cr*&TY@T :̳ĵ+= &{xuТGfw5CCwIe7wH R %^k| W%0ce2ERIZYs̠'.%2Q99;W`ֿ||g'z8^4/W);q6?ߗB]9aq1],KN.@JfU[|@+m*]5؛k?u ]HN?ޗѻޡwNw/v^vo߽`ML}32=n.aqx2:w̆{ΆQΆ]Vho wOgLDY~+]ߑS_rnh8L/ggm@d;.þm 纎Jh*GٳLq(@ 7R<h;-S A]/n@ގt|o07i~NLg{!%nnxb/ ;LXפErdsCdЀbp.ifS9)U 6TFw!BBr\H3 <ĞZ &[Nխ"Ԙm>TfI {=:5OOGrU'òɿNj"1NƠd d )ЀT&{PQB 1TnaJ"O!bA@TQijM UAXˈ S.C>Rr&s9x'=4m]#9;Sت BANoCt82t,RsљE@Ib5W ė^3N0$EJ][B/"dm!]u {fcmˮ9w *1Mb=+&dwv?>J|r_d18TI,tm?#\q>]. 
?j{j+-}sIRU+PW9s.1&I3eIuFq ЈG/2SwGS[S2?RTF2) dI!3)ښ9wkzX.ՅPYA>z:Btq^q&fkwwK`2!EQR!hHFd2v̺,!r xJS"nt];Ekà5Y?hBgZp$D SZ@ݵJA8LȐYQ!]E"Ip"H:.J,Q9w֨_ŠŸ/W#Q qЈ[XJ{]mo#7+?3,R l{nOEױ3ɑy[bŒ)RۖleGݭ.U!UT>m )+9Ďe"-~c!EY*ƂJe8Rky*;~=2[(CĞ7q(yytq:Ғ]q愋'\*7 Rs6b1)VDhBr )D)T5Q;fI|H-Nx0̹/8<@؆#cq5(H,}"Dяyn_jg2!V?gxҦegiG>p|8f9I)^#>=FL~[=RwIkJ(ltۧ|vJfbDa!GZw`Me<}g³oAA[`?n_2aȾDvo]ub/?_jv@-f8\LF , j{Հqy|3Yz5`\ ƟK l8⪣*-Ur ^ \) pTpjxs4pUXJkUҹ\H2n導~ӒG~M*y3JfcY(w He.3ċ5ou)g{5F7fʃsAU0bs=ڵyӮESgve$4Z# t<̣̋GX"jjަwhklKK}>A ϣXѝf0}w٦C]|hD?|LhRAO!JSsGD!O#`Q%c o2`i0 jvCDlv{TQ~`Lj!f٠!I3IzAFAgqit.fx8EM$hLV:)vw&P(vP*UB?ŮR҉bDͳ#8"b UWc*:xR= +#*#ZiY͐jM*7|poWaPY } l$>^5/ ~oĽ9?&q`iZyzvkGbN LWq&aJ„N0r` LCY\c*-|XL ^ \9$F o۪mLZV೚mq$<\_/%Kq do9hKA'Jο^smPWQ/|y>7rȶmZ!϶i!O3XMzWǐ|=_ag,dei2ɮQ;4P`XR>G}]}W'ei|lSS@tmmw.{:xy}1=Wu*?a-rP+Iniu8BAtL^[TaJ @ Sf.4,U%u:up({ߙeE^ـaj=y /i"|_ s|qW-wOu ll~?:V]i=iw5[s\ݥ礵giU;Ĭכn\_\ařl(0 [ÆOsaԂl󊻷|>Nͥ=x6Hd5n8Ql|Zs: j.HԪ $s@( -B ksI<$BAZsQB,\P(w*KD/31GolYsa|l&O5x~}50nbZ/So/flU}^ ?}xĄL2zmRbVVaM>bٰdxCrU٥)O&@|V]&T͡(]ɞDJlT%6&f zv<KGkGk__h嘳)flyʼnY>DVV fa{mL}pyViBX9wrz=JoOzX ֚„1Ac_l1!iVWxxAx!(\ )h)%ELXodVL 27mmZI'0MY$2ISHQ3!-ENF%eg|H yNJVge%rdF,Զek^ ܦ)YZWSGgvC u9(0$GElQo1%CJWgC_pldSj 2N'ie]td{RV+`2uհiŮ#}уٱ)vDhSWOrVIw~`{XXQl%`!!vs.YG$fe"{EHNt- |NhrY` js,/Bzg]z:Ն7S̫ Ǹ?L/.~vЛZIr$xr}Z >6(0[ Eb8a\٫&(BZ-N''h'H= Rˤ)% Ry&[2{dJLȚ]#FJw:KYIݦ{I{_/Aaۛ?#3;VۈR%`pE ,y&%01Tv-yHcP'x3I:oMqg;`3YbVvoOqRS*Gnku|y*%Sndz_I,b27 L";fs0v7Eʴ(<Ӣ82ʃHPg&C)rhQG<(Y̚DkyE:m)ݬ!3!ulڽ6m9(>:y{AGDsV܈lj?##fQ#fq<F\|*'Fw9yɆ',R⪥9f'n1#6:ýtU?|閻t^:W7>8!/|ua3OV=yvgDm 9-ՏǺ0~ovJbz{.ƽf].Mkk~v|nGZz%pf,0Uw8NҩG E킁VʯwCta*nZѣbw)7 *FV$cRG X\Wg3QH ~{Er Cӧ6g3ktsQR5aT$7#-J_'Ic4r:E*r3T1-ZIYd*'Eg! 
$bu)oKDHF10) 6Y:2 Q{t106Uq6h42#>[ɩɐ,+%vðKM4^ R6!Bbl4y8bfʧR-h (xeS9dɰ2=DaD ɻ|}517Y0)dԊ[Z`M<0JRڸɊN2Xl m"@VPj s"T=$mQ9Qa+ޕ\bdj_ ӘNZDˤ Jv~|VR\( NúgDo@voa?11i#U>ڑŽhaˤ.E#OAQ ׄOQdO]ܔ4<&9hpN +]CwV BBt$(x!,;viF*B5c*HaJɡQf}J`@;뒄 Xflz!dlM'J s 7v+tb;a$~g gƵ4IۙoL ] s M߿h!WE~K@8ڷS I̛{c@J{nixoco^c"/G&́fa̭@DpҢq^ȭI_K>.d/sm?/;祺s=vpg Pj,D-t +]v:~lt`Wa9c"BgZo]}fyb^V72]}}۪|ַ BiZL*RFe%Eыj:{[5fCQ* mO .'je:dss#J>X='"|LhO)jq$_N6ճ{Z־O^fZ{B"|:~fO֕:(t[.N1wtk)Rq<ˋʮg<Ms6x(ފʪ1Q~Җ&pUcda<{!z*gLbf8ye^G@Y:٫C6hە)e&1]vG5Z椭0#uKaV{F?ań?FCO u(i BK#8ϗղ.q7n %|NW/En)u'[7~i2"͊^,9$vㇻe0_kS}8Qnwnv|R.IA˳ybyuN>=/CX\LکZbuS\^|i(1\a3YޝE=$(Xg_G˺>Ns].ڇt6&=VR:[j|+_:S+j׊ԒfE_/ knjE'~jɰFlF`mfH!^,O޹OSk۽ry~^m?ZZ{|Ǹ]27wuWY5OyGN|(oi~y< B9_g)z&υ^@~'9Niu!~DG#D5OE0љMiEHgfs2߱1+aK=hKEŬ/vl&ULx+RZm$K>Kv|5~zug1OB8ܝ8xncNf_l5> Sm]W68'XHIUhj.S匡1#r^РkW߁yA9J謴4mVGCk,˘UmC&b^]Ȝ4~jW_&GtQ(>zʗ  W`W{D3^??O#;=g&<3g&D3yf"L䙉<3g&D3yf"L䙉<3g&D3yf"L䙉<3g&D3yf"L䙉<3g&D3yf"L䙉<3g&D3yfLRzZ/g&$0\+L t* HIPe%)5IE&Jl{ZͤժE&0>~+u{W4^'LBem]oF$T 螝rC KI2$I7ƻ98].?y:ŏ-( "WJE7TF%BJ)1JpR8V ƺc~]|/s_O'落bޭʺ:}g9 }yq_(? Bge5)(WV($JJP[IeIlmND Egxѫc1a*\㓃OnowgB AFQ^1uimP(-HojH6qqfeyhc<:`P!ՊkIg'yx|!<2[?2>rOu-'|OC&t,&XO6 BvZfBZ/,~v{l;kQrYn;mg,嶳vrYn;mg,嶳vrYn;mg,嶳vrYn;mg,嶳vrYn;mg,嶳vrYn;mg,~Ev!Ƀ31hUU7^xRm&Cr }8OקlI~--O&^˽A-S}N&$ܤLK %{JzN]y{)k_P{\+}[MYȝc2WoSCf)O#)ױdlWUICу5+$ NEmO¡kZo$TmGpRg]i)%#~,fJ`z۞c,1xnNK}#(5IȢX̶ۆb̗/`$iݤ DKGio5"TrNFuਣ W*4xȁ F er-rGvEdLܜq|5v_W_rA+%^.5ܸbf?a+L\mUMh;Zbapf<(p?axnF ]ܓw[O&xn\L@w֖7 %uDWEmz֚G }p#(Ru|UNOihO5,tuu9\UI &IfɅ*x_7\XL˦`xG} g#qf~6,Df}Ѯ ;sa5yABl?LJ浔; a |(2<0\m QvEhL؝y99!F,+M9\؜=ij5p/w)Xg 3Ha+Ţ 9U0.D0\T1nOmk4K*Cː QׄH<@b`8\%J^uUu̇yuǃˆ}Qv̈23bfĭXRUּ}( L&4^X%P #JUTSFBCF UGj8)瓫W{]}yQw̋*bŭ 6㤢*xК k sҀL̋x]ޏ@a˘%5k䋻I ,}!.G?(*i} U waJh"e$aX:fID3|P>;^/q ="XCdR|LƈOSU5P>޼/I^Ba6hU-Cp$FrSWW`3KWVMQfAQbkBUTרGJP[ಲr^(N.WYpBx=$>{>컃'Ξ660qa(ќAn{π?ťQf\}'\6hIse"86j)Jo dX';6dcy/ XМLTK'X4$hLM[Ov)},R` X4z>b z ?f(~^rs;~7 ? 
Gp~:h7&,|xzѼP+ʳz!~e}bIm]JIu"Z"s⥻X#ӕR% SZa &dҙ&~m"}I~Ԓ\"+QiLE 2IjT{% h<9}>q qktcJ`re0Etֻ: q ti7{'kHpK.>46R U@>ߤ{/'cX0`>Z ^髐ˇw*FKQ+9`*L5>@F<vlw0fhk++Wnhnb=4wJ}2=7ÝOzh7I4rxkK2tIچe60yD;mI*%!ɔ@RJ2R\J K 0-% V4BBW֘\]2]]qP *T 2wBf:CumtHW ]!ZrrHWpRڻBZ&CWص#~ QR JDV.u@k>ymBs0k`2?:`%Wˋ?S?tviy 3,)t- " Ăo6JM#`#ikE*4 h(4}46jJWؾ™HWtpE2'Vھ,ҕєJ ]Z~ QҼgwte`,QqQ%b;v= V8 NLWpCkiJIzvZto3B$c $Bvۡt(W3$2]]/J$XWG2wBe:G[=:mAn/&0e֫8p/j0(* ^+i\9>N ^sOޘM* )6EW8N iwzעa?~0E grBM12p͟ɇ=4[Y!o*l?|S}úӒܝtgK\,wJh[sJQYU*%#VuL}PϚܳeYGadEHE] 3Ig˚ITȜ^T^>(ҿ]7&QFɄ\llm2.6؈ɾ؈b-U)!CW'sh;]!ʕmLWCW SzJ* JQLҍ)]-M)Y3 M#\.SiD+zΌ(3M%Mk* 'CWW%CW֨d}St*teV$DWrrvpH (i;K}]Uq!ZMNWLW ]_&:}D+FZ5\ybgZu" v(uNDy ♮z UC b+Y*th;]!J#3]!]1N% J'CWԚ*NWg:G\j+lX2tp)=Uj;g:CHV&CW&c]!ZANWoqW^`Dw)OǺBBWV$ҕ"֬.+"8\oon! 4~2]48ʾ4:ҴF2]!` pIƪDG RfHDJ֕Ak%˓ٳCJΐ<~v\tOD2gQZꛡ+c׋ "䝝'.>U;g]LWv=UDs]`H2tpO Ch Q )HµXW3wBd:CФRD޴˨\i!E6PjW]05L.j@ bQƦ"yl&S05VrY4brLF. -%?@e ]lkwGWXt ]!SCie3+Ku*4hyba1HރVe*B\PΐLߵq4[WHW Sڳ#pE2 սw{v]^^=h5T9U;:[DZڡ=-Jfڷф ?{ȭc" ,dw<,`}ə<:V"K${29EɖV=R#c"UA):F 4WUl+LBW)q|RP-[@W{+`F 0 Q ] jP=Ci@WGHWͰeg(vU Z%T骠bWGIW?lz,o)c+{ŭc۔[YY.im54Mp Z{'ԃ 뎮 vp ZľUAi@WGHWBjjRWߵ*hjҨ$.y-]`]Ud-tUޫuut[jWjUd(}wL*̐&X*PդiEj*ئ؛|nmjL@ԪjYv U`X5.v+U.vAlpV,*Z誠UAi:FҜ%ҽ[* J3{*-- ր Òήh.hM!M|Nk z J]!]ښG]\#NWf#]YE&* 1 tf wzN(e tҜڧl~Hv\siOjȭ[UL \4E{'b#i{I]`D] ]\ X ]N0b#]4 \cj+B˙;]|CWfǪ7LoX!_t9VhH]C){6̴+3K3TDWVCWֵUAkyPZ61ҕ`z`m`+C骠r#+?]]`q}*hʾ7j?tPnT}* lx^}τ`va~Y0eŰ|-;Z,!-K"aԒU؇mzr_dr^'hٝuKDE.v|"}w J1b TDWXHU ]JZ!R2J咮e%s}TQ.jQnҤi1 P MפJ#>B&&'fI-ȾUAlxT*+ \UAM骠4äc++Q"""*p5-~DAi@WoUoO?`s-[U]C bhRl ]L`EtUHӖ>_~WOV/ㄞƣR8'X뛠zaQ)\x=_m+?-6M`֏ Q3=?;#^J " >f &IidIa̭.Ǭ|J=Ac\]ǵTʌ:"7OML7gp)k(ʁ/P*b|a%H)x1s!S[aڇfX!rE H;㌗R%P$mFdEV:l Rd5di.$3Ťw1Hkr&[b:KƒpX MzF nSt5y]5鐘Sh|&@B?y@Kt,72ٹlE.y5~|]-FSu<2t62=e!T F7-Lg!F@`; ݳՑ]6; yZ8]Rc`X'Y;v<9$ƶ/hގܳd٣6>ڐwΧG>1ŗ|kzu=)"08xe61x>![FҡQ@H!H7~&eO݉h\]%.ͣh~;헕EP+Lq>'/dAו.#xzrnK{UnodqYR`|="+zJ aё=m meN]ZR޸Iسy =aCH.ljUTMc<~U1Cnn.?9;;t!VEn=.vPZRZ9Li9gEiaƑ.n-'Q9$1$w^4,ӯ%mn\%/zPK2d(GCH>h$1&&{%HRRY tE*Ņi],+E \g@Hjd2]:|KMtj=ZJl-ғo2OGJǕ3͜E7 Y+΄ҳ \ʟ;AH yVC>7paUkH5\Ҁʚ ѩ 
Q>t'mȦs ,ZIk 0S7a[0nqQOzpϷtesBs{Z&oY7%u6Z;;Ӫ = `Tլ*pe } WApjOt>s*}l9h0q[.)&ͣ\ȍ.7Al^/\1^uc"dnJh&[:xvpHP(8g&E GL]^FAAeHYrc"p3V "9#7sy4]tgI3BC]͟4t!^>^vAzwX]s{jMW>I!Y2\t."$sOMX E:FMUO2._1RrֹgbblY75\H3)F{ dRS&9(sR΅ 88/S-c?|PAv!c2YnAF*f#wrWzuz3i6v蟛#D2Qkx<@o}69qs1 qJ*ȳzo$dEυ蝐=BZP2y}^vVS˚ W6o8$yA8d yI)R(,ijÖ9ݦS1GRSfDz7a9e\FId#( 7Y쀻bEo| F|WR ȅn6hYMep%@>i ˁ1 n⓽z`=z`n[qցR$K$I4#)*60eO.u&ʐKۄ6Ft웞O[Ƭ( >FɁtyY8OSNe 8bWm|"Kj詫d& 4YD1LS BE'sn>r"}`xθ qj84.AT bv VZ|p'lvXv2N 9qF&r]feR 2RBO9+;hgK ٲN?s*K0k, /mB{D.#J,-0JDjB%;$xN%UۉEΎ.Pmf=F6)y4>iBSJb(xB0+=lGmvsG Ea kQYVE-^epJG?GE2ߘ:V[*8ŕwU؟=>AHu)DJyI Dcɣ,iښv:DiI3:蔭u:!s\LdL#'Ndi@TW:06:+L&"AAux%Mˍ<$Bh!(ޢR%r|T«3ǣr՟˰p/))F)9o-ƽņxz%aGvOO8Υ*BSGd[ƿIZE?$SF"8Mg )^:@@3LHe<Œ{ \b#c< Ff7\.h&}{kx} . wk>9B_`^|Dkn[_Qج%ŝZѐN \.6>2蝽ݘSzj^N6DOϗ\7n㎤& .9OdjmZkt6|VzCw'8N13aw$7t7FO:%M8VbgӓB/ƜN&d拓mu=ɾQJ;З.w\HX8G<*jc̏h?t lO}dz<Ď u7㯿޾޿ӛoK$[ =_Dpg迼eWgk5CKP; 欻a\jNg]qzN F4ybЖs]sĖQZ&1?]LuR-xZ]9X\~=dʙ[WH`G|>%K2L,%@xǬJhC@/Y}i y;HOlXw^ Y ZCoPdc$7hd!iUY0ÙN3үH>';/>֤Knj2|,CIYtq߃J <0ΐ&`Ecy%ddc dKn2 ,",s\ՇgÎ 'Qx % J/|LE2v&+PRLxh l0_dِ  U(H8ghQ緄g s׈ڼݬ] mћkmzyB GLkWџ{ߣljH ;az$JcƏ,pG~%c172x*%1k%FM732o VNCm םAg%5aelv&F.Ξ!:_vAK;b]uAK|UXUӧ].FX].k %#YFtAB6lJ`:UVjdwlhE~&OWU+wF+vU߾gcVot1n_H S '.ըp#&\7dFE0XBz3x1SᚋîpWEǭ&^8 <>‚r#AiZ&QhD6+d[d9ybHV*RjhlleMVd] f5\FF6"i憆Frs>j 4ٓC El9~@l25L&FJ[_ rh!WꗂZ5БC `Wj>rꗱvKlY/T4 vn@Ԙ)2N7^-qޏs/D4w-?Wɀ3l}SBi}cIK^${.@emv.m30YFAh Fc&غEB 1JBT*f!$Y&&sJ3ʱji;>6~nJس Z>N&âln=Ӥ>ЗGN{_m>(/',)F!-$2D%ICTgA+s1E'e;˒ʃFxPx5);*+cTDƐIsW3-Ah8bɗCjIԒiH4: @O\hyA|:uVwPt*=Ӧie^M,4H9攲#cIYx2 2)(<nAT⩣xf4^3hoMF',0FfRRqQJyu LAT<\IqYsxS PQcPe'n\w͔s|jRH$4-zdvWP*g7 H JpnY'\~j\a\fr=,gy|62ܓ^%5Qj> hsO?<} N&"ʦ|LA,PqƮb}\RwI(v$%]0@[TB{7z6??Rյɮx?}U`;.wֺٽإ{߽`nk&qU7G_jÇd_7:wV{Ѱ}Wu ;rD ;}{;xۭnawd׷Zk葷Yu:RWپm纎J&=T*!-q7grtC)Ц.M>kf9+NN3{wwns f:ˊ]؛4?fg"' %:Linx,.KF꤬g*iLIRV1ylN u(K'Ӑ1T'8!e U:6p aC!:(TAS q='.$- )d.I+5(*Jp`*V7BA @pDT ^W2pQjѡē`gv>6au2W:-רTnhb  Fib:pdTœ#;ZH۸6 uK$E;LXץEGB2! 
h\2 R34U *[w!BFnF${<ĞZW>`˭e:df/U/:j8XGcYn#sz~BǫO]ٟ +E԰x:5*kՐ / 0XEPPت;U ]NIYANԺ2-1HT@QBUچEXjj `Dw)(.C>ǕB.2KeNdDRNa>$3!Ug0d^K>ue , BXZTΆ,Lk&\ /T: ܐdi\o˿zy$o R7ek{vȹ_Pyly6mѳÉW}c@bl6S%]׷=xrt)F7=_XYRU+P'r$&\bL\%I̔Lu[ja^(`[DiO, Ճ"h "yL@-&C^\+II֌ȹ_3ֳUj.Tut_f.ϋesz_`{q]q4ddX~YA9rkkn$5O9I Vm%UI^p*{T ^3sΟ-_Za.nQh6~|3+Qfl*`#AZKֶ  ru(IBJVdOo'/}F_+ 18-v s,V{1j75Ya?*Hi5ΌXÍd55 .B k!E[5 {Ȍ ً*[5)"IN FRIDv%&n{p90">kQ7,(6$|QR5K#<:vYH*AϬ  {o ;Xd8kTEY )D*Y=GF"lg̯:]Fj<.ځivŭQH>UIJU]uKlRHDK Ej!h\VNNO-?xrr< ;߽ߧ=@XDz{i ڕ=̿#Hed` wn&[$b[iTo=X{D<,y4'sn76Fyfwn_-㭯Z{ }j/7vxܵ4׏uD{C~0n?GhaG>}|*?Ͻ` oXyO/+M+ݣ}k=6"?|yu@5kXa1hg*&yLCH 0H%*zzB(jwp7V1Y QF(~Oi?E-Ei篍Q4F(~Oi?QQ4F(~Oi?YF(~OiQ4F(~]l?Q4F(~س i?Q4F(~O. Vޚƙ*GgFh3qf4Όƙ83gFh3qf4Όƙ83Z텑6#A#j8[C]uXβR7Ri gÆ(~AAi?Q4F(~Oi?X,waIx]̯gN&Ry~Kw ֚ t&h8x@rRw9P$$)07d"'Cj[1}f+Wk@Yccg&@+CAsڎ-mr1r@BNdx9F`X1ITQ "FYAϭ n9: F+@+y~8t3jޓuzoz;h++(əZ˜RZ嬬a1gy 31 ״aDŢLL m."A³+=0J2.JMu$&+YXgߠɗyeXwU;60ek}kOkn9?j]TE,\軂w~TafnRUxS*xg7:O$겭p~a~!? 5 5_?bJ9eAH𚽠h!EQ(,`d}:@sBC ¡29M9I&|<+tzu96 hp*@1i]/aٔ'k}yʳbKWt3>6${N6c"t oX 2/A C%9,w0!*U$H)Bo Up>b G.(IYjg" u0q < MSƣ)K!`A o!Eh 0$vKa]6@PBd#U#[A-%p@YߠS7|"ARCct3JĖNs?nuݺּ5ټ{k-&ّlz#I=+lCdXjJ:b F[SAlg z-؋E%l sHFအBt%f'1+iErÒ)A*)*O95e/`1:J: i F78W;}ye^ѤsKk9J (D@ py# v0u`t@,үl4KF X:wA+ߨ 6U:X0t a&.Gc"mq$:] h,*9V;JjaaLW;0DIPU;[,^dYww%ͨ;|zo0|ig笩`׽Wͼ~sif)x)LK_޳WyQW?^U|b-{V\&Js VNDVcU4&%ï8!԰F i*"y=0^Gr r(.JW2RGDAC %n ~g=W/] v4~ܧX0ͳ|+poMDy[Z'k2=QZoBD)-k޴L{-ӳ^Ӈ8Rn~u=@+WjB\:*u0GcӰ] ^p zWi  B#Sg]`2Z[Fg%L0ʹ cZaŦ1-3m=b{?fGzaCQbX:R!B"*V'AAR Y[4pP}jfax=@+Â8 e6eQ*P/ J• d"fe];LiHaLGd;=yj SOݪ:<|Uwlf5G"ڭ~zqF64%)A[J,)Ʋ1CH (0d2ya <5kzyolt`Q%G外DmV 1 9@RT1e۔fy 0^}7G F%E%Eb93(FK R0*д~0a)BYB *%ԓ@ОE- X B0UW7=/tʏzxTXigbE A&2H -ctGlb+jffrs+txpF% ".WH> mK֢D1\ܻ塏>jqK1iSJ@K3f_W8=?9Чrgj%Fn7.n䦎͹?gU!8'6".켮2 W \=zVj_4MIY(!h(M3&`/,d'ǝ_e|\{CeHKeZ] {Ԋ\Шg i+x"ZI 3VHr:2B#nP(QNyiԑ#rQ;CaLj Q@c8mjAh֡]rq+//]}Nd_PuspUR8Nsub2)T`n=򔃾 i/InrYQzIr/Eơ8O֋DAjS (~nwR0ʙ%r8#ՠdUtT.gIh bTZJ9,gUr7 f3-C[E )#,s,˗%#Eٟ έh;'Ir/ 7z =?HbA` ( a "Pb E 1BIS-=j jIo'v\x;~'U0Q#1>P. 
қyi3Bx 3@R+VG"mߍ*pQ(&z&Jp'm,ڣ!T^ mXX\IW5yіov(xА]DD >Ej:)BhA%%)U'H/-~tvhC*e` z>'{n"B?)OQ@ @D;$&_)؄1i&{R K y`eL^-Ēj+Yb0}Jk=j ۃފeԢ=M^M].x㋘ YѢd"hUڀǫl~q+Ηx|P.>mm&.dd sv^%)XHڀF+4P"EA, y X12qb.g36Dh+"wJXu ,0#kJcA{hJb驶&$XDjDHJNDd#P$MѼvA>?QsK@[+SG$HS-8"AsTL0)TcBkFe5̫3wqZVY*V>ݗ<3U濚t%\Ln':n_~P•Əs"ỹʲ4E|pj'@ .00ZT!2QB9agdXI)v2bw2ExOLټ%e N8j=!YHDx3mQ8V8G Qy>NjVJZٿ*-7==[~TrLYdׯv9ki08~'>hR>|]ԟgrQ\ySxy><8\fW1\`sbNퟃa:疛.9GhyVCd'Iʞ@nn-3J? օl O.f=Y98Lo2r{AuXqbb$vƙTž}}n|Q*أɟ uQ-J_ug7?<_|2}tão^" g`=n H1 ?oDpZ뺪d]c5֢Q7F~l1;- ??r^ &>U'U] u MU_HTP>\KB[w՚>&e#ǭžyCex ! O .Zq" (,0Sy6 S^*8ύg,jF"KRr*bI ,$"N+g: {m'ׇl}ΕA``UuvHa;&b`EK%HYH Bmd]#́:qޞU|"ݭl]Cࣇ_ b*nb柜Q̣z8S1@sq+dbjvw䍝|8O8|S</6"<:F(؟#r6Rn4fT 0墢L^ =*JΪ=9s~ Q׸nEMD碭IBO/'IOˋY|7jnynhb\ŻQto޼?9? gj8^oVAqmQ{MJEޞ jn-xNjUr\L)I]!QW\&E]ejAu]]e*%+.̡.tS-{1Fαn\3wqjYT] p0C?+MPȵC9% -8SU~TּPjaYo80 8``^_6 GFPN(*Px 8gX-ޞp~}`%wL,I(Pz0b#5̱'ZhXM4llzIy; $Rr] ؐ|mS @m~e7YuDoR#1e$$X:fHD1-A}HR  gpy`NE8!E&v&X13fk9+$]k;Uk*)\S$%hu+/r1YFg>ԓQt&%u1b*,R&QD0O]Um,p(XusIe#n~[>U,):#)P+pHKD QX3^6$P.MNqQ5&SX/Y}G$ԚwU޻34כ; sBjWvjt&-;G=NE}cї8 @W(D(\6IHjx$OJQ9NM@7#LlHmriȰ_S1~*v)?ϔ|a9J2,hf+A!#/د%lG\3)'pBÏ14i]QM.d~!`26OrkחWx7}~uվy퀕Wl_J?_[e]c{VwR_6v||o+@ xo l,[t_/#z&8)7Z9ӳ*abWVQ+9C:2!M Qg%ڏ!K?G~j4di^DYbpcIk+@`19,(JRge##"H(пnTA_/:9Y2na^Q͜%<jlEp>}H{b^FCTЫ 8}-X/>ϐ^BxQ8kQؤO:eRj/=:Uf'Ľӊ[n8nvθ?j*zG7fP"[iیjP p}AGK٫`׵ϗey4粅lKJ]Bvbx=8&(Â1 bd]Ny,?6PTd dr@NJj:mEsy- e&T0 1[wb;߿6M^Χz`m]38ZܳQ?oZMeϊKƦ@, KqZzƦd2dm\ y#4%` o҄ٛHHpˡ`X2DDA}7;kN t%XpLu4o+FHfm-GL82 gbv=dW>bDN'3p؜x[؉W&I6)D4Yoɪҳ[wnvd]؍l vqt?~ u=Ռ^w^0]M3}w .jG!f/ZcΎ{|(*dF 289}$ +dr"W"@ՌPtZQR+XЃpNBpaܺ}[Rݾ]ywk{r&Q۽M)ecrVNzZ3Falf,^.ƔE$H6FчSu(i I֑l{f7YB(}ܮkԵ,t4 xWL?vM8OZv霽VoE>K緌M+d ^/>&h;jCTRRחiwn|@UOCTv(ީZm TK* Fei C!Xg9f $R%˂J%¨i*1$ ֯hJȋ`QFT̔e4ZA"d0W=#QH9X3vAZ'dKyl0lrvhmJW=YoSZي6AC)^HLT$YKC g3Eb?4h"36%% ͨ}"KC~fOVv"y ᄎ^Ƞ-Y !k\Rt s’-10# jŋO'dΦo]A(GQ)P02STYD!J!T"J5#N-pb8fή FO"[eĮ"I7QibtrVB(\+5&ߣBesfdQT1&$$#;r0bH͒Fm/"}YV'xQHXQ2#Z Zb%'̘.1YZM#P2B%8R$9_<9ǾY^[fP"b(M*AyvQ`߭x>ѩÏZ#Ѿ?~[>Bo\Y~Ԁ`Gz u7zC_OoN_:6~g&[cZ=Y^Ckkʏ./f:a 
ם9ߧrG=!w{]}׳+Ioi\/)nyGɿrj.dk$f5?Y:'qW+o]*ք|W7 k;ҍڍ <@FhMGG 1ҁWI$BJ%FEWh0dQk ri2 <Փq)"RF d%"bcrE ɐNtzTko2J^ﳳSt &Uc !wuLry^; |RaчC`eYd_َ\YchAsB=_cv])=jh\%+aIӣ_FzG߫/Dz-*8kA;m|,Ib(U* ETG j cpW|$3Z~9bK))`^ ($!(ʝЫ;"Zh$TCO^K,E”0!F] m [w cP6eKFpQ&H"Hbh3 }" uɐ*Xܨ,,y0? {'Uz%(@1*Nt :XI%ٜm_/3M/v*,`xmq[7ڶ+Z3m{p=0獽5,oFEdz2~S|qe@[ Ups7ovu:S~2gR]^0^:/gGq 5Z]a̭X{heyk/dR۪|2YfBXWoj^Fwlwu=NNkp_VǴt0 s<Լ-[^檣xWo(wm$sQ2/ $KnvA{N )9"-׏m%rSS}ꜙOg)Ԩ7Z+[%YT62Z2jWO;w{$H?>n?`0o"OR3U-,d-onr"h*ܝMFFa4I Y^KrAH(?oPPeZXb2P1rъwIERsؘz\87(4R/_Ak)x)'ZW+T+bggpģ%OJeeqphB"6Ls)F>x*)5* ?咏)Wh*ciͻCs5[n6v iDb ̺ =zs}s}w>fю+t (R4֧D!۪?R\&tܜ5A҉YUb9c$pNTeM/dJy fk(|O

{JC.w\0b*9c1.@1x؆yn%=-t*x$ VF Hd P,Xq$X{ Ex*dM'0Mr˴]@2lgUdX=}=&76B͠j`o]T? +1d a! eOc5d6HC#i5KdyJ>7[SR :k,,pv]`&C0eWaLJ.JVb]JA e FȤ7䖠,Vjoho*>Cb:!pe޸\Qי%^ $T* ~J )dwO ? ]FkJ] ǁҪ=6;h:]=Y{? ~)ĬQ@p|# *SNdvXNRB2aFYi}MY 7`" ?^J.G[ هHo,b78/jԤFk A)Cd"BU iIQptaXrG}qTEK*z`:#fP܆`m@]G U2YЩ ֏ݫugD%SLe5wx*ǗCbWѺ󈓹hOY3/V?b=^!ExB.`AjzU@@"P || P-ft ((_L(}XRkuxXbM]y;i+%2<+N e,i₟k7o,t I,Ai9DP?z o^ 7Q%1C4$1q*gD!cȡhlͥV0]1VҮ3i2Jrd-J& z+Rp jܭE a@Tx?W Φ%0/Lph&4o}֞n:_輹X߶^5-7kYD)nn@3 58)(ʟ盂 MŴƬ]kA*RZI8fDh31i:2bo oٕfU(3h7,JxKDlЦĐ 9P/p} D]%\`~X&R $CB6=<^rc T_ߨb0l + AJ B(M﫬@᪝r B~ׇexym žb7J,FjXLY, 3ց4hp?{.LmDRcΩ Xs35~iF=P(-g/Uw&H d2 Ip՜ZՎStr~f}AVA#S#x3P[;i* 6 3蘄jIR8$Ct.H,;(`JX-0 QX2伩#\];*7n)XoЏ,0P5v^ӲrqjuWE``rт -4DL *^;ӫm% ["EmJ0򐡻,j] Z]r_j+:n_?Tp(^cwΗq?|)xbg-`1mRRzuBbsV:!!|흐4B UgNHO'uTSi Q/@Jp#T)PuJǢZ=&;'v9\*ujc yrguCz"Dŵ!5lnQ$RLB7oZT}>HXD͜Ek\3,o7W 5vs|S|]N!yWeS3f/౯L{s8 QwQn޻m@sWrI]9eKVRFXUcEg!7Y4KF1 >c'=oŻS]&r >`jA;uxߎ_ګPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPb4 %FCPBPPo%7v4S6AнDYn&G_ҡm* (7 ^$-ẠhGۈQ?W`,^^}=m} ]_rRESPVBx2Xd2$W[դw*pmҵ(GMǢn/B/C< pՂC]nn{[w%AֿʾTٍۣ} yGwp?tЧwr ~n|/To<5Q?jGM5Q?jGM5Q?jGM5Q?jGM5Q?jGM5Q?jGMg:-[3NM?ᛩ釕ʏ?cMSkW <Ŕ\tC˺^[0^ؖk eT~W믮ٯ7o.'$ڭg[?o^͕|%N׼FѪIaMV{D z sE3?i E:h LI8@;G$^_͂5~g1/\ɦW#B)\ÄgJ:e˙s%͙"b2dk2tt"D6 ΍0 `SЪ@A`ZVS!jtje$rΛУ#0t380,SI_Wã[NZ[nKr;<ڄ8ax[ob}Z̬SGdQDz+t@ FvK=@*RosϚי{,)"W z 1֘B6!Q," .:Oo,b}ʥB8IX]{h76+k#FHqѲ9U Ā틳r],/6oso_u9tfWoo憤n?<X_9)a艳aF⩳'z.+nZM'C aN2Y-k?:Osc뇹C8oVS;zn zX-;;Ra7gRo?a.fjUsORGȔyXnIRK͇r}}5X-϶̙3??p O߉SnaD^s^]su_~&.o}Km˳;7kk8n+~,J ]3_ ~8gy*?{Ʊd O C7X,pw$_F?%p(?náDJ|jZՆ_4kfkNUשjg7xٕ}!}ӝ- ms"O7sv`~(vs^iW3sp_@ [场GL?J{?|#W,1׈KF_sfIk2\ݩ?NFӑ nN;8?՜_otfϟqu ͫ X~Ro]unN}ݙisp+~ٱW~}~c4~)?\OO?χ)!oP{||A]bs?kAGkyO?y '˟7`Gs:mwsI:ݠpi(d:;B}wltBwG&)fZn 5˔/Íi<]sA.0z0llsytICBWy('nP7jzPLcn1#f&i!݁! 
);e ,Dz1wǠss7PlPM!CTEsuĂМ+$8WPL!%"Qgvfm,SU~}5 =sƥy~o_bʢ#w8 g:F,ù%O bZGlSL$Ye^@.Pm1֯Xȸn,[bc,ąGo2{"߯P["?QKdMlD-kKD-kKD-kKD-kKD-kKD-kKD-kKD-kKD-kKD-kKD-kKD-kK`-sڵ[E#!.@d2 5,I3lX6#"wR;]`x/ڹ>q>u։%!w *r'5ށgEiN3gC,[CDJOA:}1qvh:u S=GЗK@) ]/٘/6ϜwK'U*F_>vXOCGYJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGtJGt2[ e3zH p\143zCeD) {՜ {>K豳糔U3dӼoG篮plav4(} JhiC*2xukQښSY+(]?NF.PH׃<]%{ȷq T/(4)x-Gè !!|t,5(<Ӂ$VhA̅@_dS_eF=}=yCUV >xwG]9)И7 zIZH]%+gB8q\h'??r}Xworz?ee:(ps8,;Lף1x1,Vd8|LI6w!yNN[Z;ewlL5s^=ي_ʑYk"Ix iĄ၁2%-at$p`XXD~AXL햱V)f iƾƒM32>[-`NqL('m>7J\?F-6Vjƀ+?3C>YlDE <'~%^b3^E1;{6P̜fɤvXDȔ҅-v1q[l7Xv18ͫծv#aRt&3V4rFaccϙ\\1U=* Ȑ"̢[" l|`]H"sg8aԏ>ɤb6bǾQZj7F$֏J| Yjy&3P01&;c$dZ% Zbl/#J+ŸbC5S!']S/ $*pG9Ҏcj?{xvzYfy z4w UA8N0š 9((zdE0 7[`d/LlZu39k&~ʑe;/{Ć H#Iz$N9^%jaIGI:k73n7F-zb6SM`.d%Ͷo { 1gg3N6dAZm;B\\VnGFBeFhDWlu^]E{:vpi宧K_;9Kwp{ܣwu<m*%|M*eqrBnҝfffy۰|V0h4yzfˤ7c6 nl'F vv7w,vyaf}F]?|uz9<ɷL<{ѡYܹ&ئ˟$W&as[#!!0}Ʀ)z3_GJE9tS_W [37]M(_i=\,$eFZ$gEӬa /q`(. @՚9( vck B(o>_4G_;TrG8aM*xuP5IH#aE2&gƀ 2&q"Os)DS*0Kp>b1qǃZ1ღ1uik/iㅣ4sCņEK'J[Ÿ/ }JB#0&ZAx4A?B3q<])r-,A9 JTP9u*ꊆ}:XSK:0+atL`RcCkHd"锈 &R0cb)0S0} "L5'-s!|ק-a8Z:~,-FvxWDP(CuH/avܷ;w;м+T$8C3N"n}B؄KJN&ǖ!;PV^z1imbZZédcA%0J1Rh0`O,N$'_Nt7x qۋE0l7Ϫ;<٘jӗ]>ܴ́8lµBg)Af`= F EIB9ˊ`&譑:᭶&7=)Z /EcHwuڴ~Cӎ~D7 %Q3 3eYKa4Ih)JAq}ꍜxmuϚrfi=a.ұeSA})[MuY#lS2OZt?k£v@6K^%R?c+&Aoj=&H꼫":~>:!?CHn&fd*J3Q[}HFR:i9kBA1pz&JsJYx,2Kr0C2@@-&rBtii:s;ư/rRWl:۵N%n:ܠAi} A:NMbVV' 00ӂ|Jr%LQl\^ev,'<Ʌ`I*28pAJ&О(0!YORYR=Jo\- 2HHBh!`-!i&0n@s&Yb6 CWH 4҅5PW1+u u!r!rn1-In"ʜ LuW ~QykjnO~et>k,d4Ax-qdp9Jzdldcqe3vAhC*Π@@/w٨8W^=9DF%jQF![#J _ AOZ4q$XL.0$.]F9v*K0kzB,w?84sh`ݏݷag9#Ѣs#xH)s@1'I  X()ڈ 9X1ǬLt/-fپ< 5V%q*;txE2vTS|\,)H-׊rI܀s(-35:%B֨HIyDrXksp`aw?LHNZʹ[d nHÉ,nZ1'AWRFoE0!Xriܲ麱Xa !'w\h9_}.b#L« "~yT}?$`ؑQJ맚NWEs?a5Q9sX9~yzD_-VK?Ni58Ajëp6C2ZԅIMG0ϧ)wC θd`^[#%OA3'xrc_/~;CrrCd @mdz64oxJN^ɼϹySu[[|_͠U"m}k_6gj(MlVZѐ?|5wKLZ)։7|ƌ*FUuEZ?[ϋ;߷7,/<6.& ĜI^Y̭rTdzy$zr8Z11as$/k=~mfyIBp2m`ŲO'n6cNF]/d稂u]v+\BGmJ/d }Y4WEm^MN0;:5.:M8!vϾy'<y̅;~f`d"A&" {iҩk tokU1Eqm8^r[UO&Q؏|6= Gh٬ X@hM"dqwTqEKշ 
U$*ĉWeYnxd,TGk5q#6H,/2X@I9YD" Y}i 龝 +W>!\mk)\Bdh }QdHC`L{ȬCҠUY0ÞN/{: fɾ=ng'5 x˫,t8y6/9oV a;G i]7z qLҸV[kWel2kO#G -E!i>Pue渓pceQ*!\^ڭ ׅ.{@_o oG`6{ ǜ^s~KN_=Y+<ԎW+Y)6'zЗ8 oeV{Oov5` vv!"%M B&n|iSI'NU[t s79UnwFRDרXkRrEX2e^kmd(-(^ Ex#5|wd8nr?ke\j.jsu۷y ||2\RՆ{.nU?*u}>2_+ME벻0.p֍}5dr]zյuԖ .wT[y lw9l{Q1 &S^d)MBtԄc}``X#2Y&8Wb&H'Pͅf}VdE,+,B2s5% ]Џܖ-ætuS%N]@bMǀ;Ἱb7f6c/,5 kdM\26rMޝ{fm5l.Ki8~Pl#MV} pD~Oi7:RI *khsef`sA m(}}+=q6_QzIW=̲#;ESd dIlubiGO%]C||s彵ZfAݭ|jY8g#)Y.4H9攲'cIYy2 2)(n xxQ<4N3hǦyߠ K:cH#3Feɥ0:"Ɓ'u LAyq\Ҁ.S9>yS .IBEGy6"XH") \X>H}oR'H[a=oZtu> b ]Eԍ(\M 5,"{PE>n4#WQLv<xQ4:xAARHu$-/PQIw:"_3E<zqt6밡Zhj|r.pomNoJX|+#ͅ24g|բ|ohn{k!l9%E"jOV5UVT^BYs$UI#~UjF.mޔkw\_;%ЊW_,}A϶ lSӅՃ_V)*Jkv΁9h4mNРKذikC8)OҕG:/uUG_U7~ YȢI8N8?p?|f~Jjuv0=}}H_,]xM=9}y7CU47G[s_g/2Kd6rLY]|/!$?ɮ^^/&,mHJҷp2?ڮ̳16Gآ\" xK?¬xi/=Ns׾;Ͷ.Xrˋm5Yd˶E{;4:rU NIeL{T2,IWR\)V*%]_Hta8qW׉37/SCdy;nv jޓIZHM6wLSyӐ-IX8-V{f>Ϸ7]1tn .w ,1Iq2hJ1AVWQt=]Xf&i( w 7,l1 F"p ۺ], g c۝1s1IY`*itZ,vJY 9{)YD&,'2F0L >E,*" zC=]l8km8dt3慰Iէc%CGxF_\mOld/lh}eLU#Q+}zyyGeQQhq2G]Z,L%cc$G\ѧb|$wG\^˨'JH!̕qs2hPD^zCΑk߫ xF;e0S^83ibY;H1 ZgL@˞7r)g\l%FxOJ"f٥r P̚wI+}1ƥ#"X^1p1]QyT[k(T߈nQ"KdX̫޼ȌHh,Vt J%􌽎QZoK_OfC#  /pV1j%ԫ!*htmyS՛ף4I&D,p~dmA#IL1Q  8WZ6L-VI'tRFhСRFɪ6mtۘjUI, rmY*)z>[wzS౮5`ZA]eJkRa:7Oqx8<6X3:!z75T/V!_~l>%\!Aa'뀄1D&Hz6yswoX20XuC$ '}id% J_U[I ֋1&qqX%vuz+hIJ)*Udчt'р2WkפTU-,CAܥ=pCLeyU)i3gk79-~C:pb_7Dƥ gQ+@.kI8b5W =/188\]Z+NTCy1̆jorZʌrcAnA50(紷~9#7ه.^o7O.c;E|/55뮽ݗrT~|i:`'\9DC6q2\P {@L[C}ӈRN6֒ DO 6Xeb|P5ײ\AU#&cor'cJo,gB虅rd_۲pcqG/5?z(QYEΡ+UL2A4y-b U1nRN>c7M2>͢D^ )RkFhjuN5 &/9Oo▋9핾zP #G I '֙ZR0gt xCÅ)O:Eodzhr@/:d51Nh8"ZEЩN{˹79qWzBǾDT=QD*sT>1^{UX1i轅ꕈrƴ] р3>`qHR/( 42z&bor'6G.חj^/M\#G.>"9^i.Cc _Ɂ1%WH:DI[:(^Q F.>9;CaU#[6x7l-(Apc㩢Tn*W_PiR\-,R $yQ._4/\4by}+|/ _!>$ ~5ep7KsoSkӪ}5]4Qst[iALHfS g9M<4}e|׫ @|? 
FR1H3^Rzߋ=w7x$Ik@"=lFP[005`u:LlfA:ڰeQѓ #Ӏjm-!ULwiL<[i@eOOn2ކӪ'!+I4 4(ƕ9 MXer1N%AF%b3 -,@'k @V-\X{#ɳ$ o"4(*D;?ǫ(E^q0?Lʿ*irՀ\_\}64Z}%ʗ]ǮCƬiR_"> g\,֏xZ8}IK¢y;l y ?W scG]Μpynap ,^\FE^(MRRcS5,Tm+wdE((,eEK:}%ŎByX E*s&h LL(E oDJZRcݷ UFg87*a.Wcnpm,ANϽ9X1,VBDz̰,, S vNHl@bQiҰkN!Lq",yzM@H:%`Jظ<<>C:^Ma^ ~̃pmXc;\ӻCJ`=}/*,W|1 |inȃΞ@F!g?>^tXOYgЇOwffn-@9`j=;*e(&@b,w>q;5 BAK6=|CZyP8 84Hdw0#mґ/4Tmz t1^1eDTiJxbw<PPT(ab7E YMQVXSd8)S̸ e^"|U1uWw%d&*܁NfEgr>eKfE-y|/ r`6P6iykX]n,uXg9\I\QXU_|8uYku`D*>LК3A=XnT a P.zu-U| ?_NpX?O0'7FeU,Rk=x?0t˯Y1ǭ}#nBhtװ캂OS_r At5=XuzHRLkЃ $m/T<5a3n4XǂI2k"-^/u8mgg0ޚlj+x ` Gt@$ŕ(vrgDAQ*z@ e^䭙3cV|?_dv2STee j@ekEVsh=cv:lUZtz Ml43Ϙ*ڻ" "v^Uij6WBGwO%r 8{: Qy$;, o(G}ilM\P:vĭܢ*dMB4*\.fS`ӏY8gdd9F8㪑~O3֐-H.|AhGDZyn͕JJ7~ |*a| O>奎a.1`/+Gߖ嘣U2]NFӢG5㎴hV>&}jX hv. 5HPD,꼟m߉]5(`RIZ`˥ Q[-EVڱpD+6Nʍh (J4LxwX|Jd;AiF!H]sL3CI1tu^` 'Ԏ3@,< @ka6A䄞;ɖ:= cXB+Ѷe(wU?AҐwU4$S틒cx8kMxHi2 ($_"Q~ y 1ݛ'8\2>qRq9|썤Xǜ@ <טZrf ,TV[B0N9:ƹ@n~(q${g.xa&IŎ;*v\UUaF)%s{3lU qX!-ㆇF-i!mNVha;P-a;L9{LĨ͊11ٽ"{ cy9ո N:`\°bЎA g,aS=!|fSnBS`5m"jؔ6F1XjMb0906j8{R;saʩ.ʂ:F<W-pu,{A#vojT Se1z5Uv_ij,d2F5QH}5v%f l 0kaQ^/'LNYZzmީ&޵8[b[~g&Af'Omu9qU3E.lKd\1(th^CR}5UB 4V삯HMpa2%j}R2G\- ]P PQk4u="ZȰfR8B^yA0<7bCqO\UHҳ(\+.(clunA;㳔йP j6S :1; F +:1xX49ldH 'f #-JBf@XFceK%܀!"ܶ4vD45a7l6*ᘅrp g@{(MCR&  8( P!i#D4cgd5-t} ' JyY8z+wJoRa93͸ 3%+gQX *kہ#,%QxöNʉ YS {15" ! ) JR։ʘb`=CiflׇJY&p9ŋy0DKX9r0s$n0ɕXPP%HM`F0AZ{ B?N(hq'p-ްj̪k2u 0fL`R3:A3Aez ] D-lϗ[0-0itumSRe0/G%_eOsa=k4/~z>=o?q}c!?&>YlqbOKv{m# U6tjVO-@w9;Q8C64;qէB.QSC;]ЗMJ'2=V:) >ּ_yL5m%)+.l&}^F['xy܂F?pFe` 'tgns6Wj ljf8B<;ce+֩i"1iة&`1Mp_zs&jj7'#/" p<]١5g6L2 ^@3Bgj$Zx?=H±? 
qI0ڍ_ksB5=;tYHXIx}v_ !WFh,e+R=Tfcg\/";?|쭙S~%5O-p7<{4cGv]$ :HV^joDoA vtB0֦kbt…316\.EKYGM{@y&*5a{t&m} dl3CϾkw92]9&'y Tޒ'=9*w+$\D)EJ֊.[U>.6pձ_oj.W⿛88mϗb}oXL7YJwA}1rh/Q̳9/~=6ۧd8ouߧ;̙)2Nzg22Q#]yNmR x<|#\|$eB`O֟W,&v@_59֌쐭WwƏڞM~X</w(OvU-Y~|s;Tuà98Iݽn3KII&n'8TW΋Ggf"$DILeXp%Wx)YD\SAhW9|zQ^,Mͽ]pOT/MScy.=MSi8!+`\7|w,ʣ6ь'b9C Pa{$`8M]hBbZZ1-i!}mJ ?v(咦ai kXK5rO#Lk^S.aZ.]V.@0riƅcOztHR0=¤/PƜI IƸ&xF$R=C ~w)[:rQt[g95k201KXT %v5V і`za.eZ0jbǢ@EGr J]XwlC#ĤKyDU .ڥRBb?)3gMEjf>ZBj8[QYn$R9.ʧY3AL\x%#H|!#HS578SXzkNs+^Ws\1%4Ք XD,YGO;YyYsb>^)* RocXq"%h@zŐTNqR8Ϋѕ ,x PނuD%N%M6)0"M@$'Y%LF:mmNf[7w{S?J,*eR)5T΂%O%w(1i-HdȱK9 Pq(P+2Д /F}RO!q,LxRܘMwD7$v1]~ޕ@ÛMF&@g܇X6?YPnF@uAYUaF~ϟ,_5YQw}z·Fӏd~lG`>9m{-7mlM=Hf3۾{^@@. `Eb+f._ ~ъ 1FQ@^-]Zhb(K u330e~s#n(lZZ G*6-- FC*0~n|x X{VY FgOY+V( Iq{> rZЊIS HPό^v]3s9 >a(WGAIG=|Zŏ78b>q~^HI1z`2fK.&\Zl~ocǐ0j޵y"^4j5RW&2W>f5B0헍 QCuTm\qy2Wm|jJPq<fḛJ';}^η][G 1'踝>j1bhCգ82; ,V_;Qʊ;.{cBNjM1Z͎O"kG ']G1_%Ŕ ۟i|GM).j`x3Wo1W~BP0vrrwځa4#dO|rRIdluKC;< Uue^P ܵ? yJ Cݡ#ce̗ąBZ6X,/;/ˁɗx⹤㊫VS,oXZ]_J\Hd h̲˦lWdCk{ k}㒊,(=,%:wt%$bs5R:KNgq7.y@[F Xܟ_h|mcHywn]kƩ$s#R 8ĉNTi,3XAS}v| )0{H{fm݆IFDZd]`EbpeBZl$dIN(鏓ko"M{dk#)6rlǕ7kRrS9T:zw&Ǥ7%m4GGOZ_[I&4n| Fl~"MrhշIz×n$Ag%/Y&yQd0 #7"O7_͆T/S &y!hgQfQdecEwo)z/ Pԟ-'i(n {fNj$?L0tF }/s ᣟ?C`2* J܍Qв5=dY Ei+bT0qg. !!9`T8X{eS60pՅT /*2urg؋ sEEwUL0L! nHTT`H{sNQ6J<DRTpdQT $n١P(֍B@P44RbU,s^G5b\MuHM"]KUp]1I{ُ~S:nCʧЙYz^}Z3{ ˻hXJ-8aB * FE)uHrNBc͌I2஖J*]?vHu:׆m3!HLҎݺKH`M ~6zs9y'4Ћa81,b8rAIikp ku )]Jrj9NmE}! 
vS1 2MB[)\ǤE1A2)+EWzDCB(wES(ޕq$eOn(}yqu/@b,q<|EW=dG11'pl_]U#JT+v/_oQmX=2Q{,x˴s ?BJV{۾qZi6 jʋQk#W8T^৓O㽹A^%b: F]NFtE# `hc87ǵe3ksh0$W&'en{0yjj} jvKwy;={ Z`gv9)Bs0TS!F[6L!5&YX@SIBL_ysۍܥv]~\ 6.Dl\,ŢpC I6.nŻn\, ?ظXRqqidq`8-.Rl\,8ظXÍz/F4?[%?8R)$DYIuq`bネn犝z5Ś5gk6.֒l\kใwk:3?K~H9ՔZuMWr~gO{F5!cN 50;EA7^ a &{fa9~OOuBeH/AjveIU0Ma80jjH̾L]xhԗRlB2Z{rC:a+1"[s18*餸rXQznKp #H狽Yv&|$ULH?Yr~ SO (+kGC\EtOx7)A1mwv)[nM [r*SRBCVF[.MT'wx-L9^߶л!Y:UuAwDwŠﶻE$SNz64U4K&&r1h:c<ƻ2M6[z64U4K(CFk^лbDurxw]dʐښw\л!Y:%7B7+&1LS-r`N څS/: "Jl%JKAJe@ s@\Zp2Ts kɣ !{PuGh ʈsLXX X f#`CxNtTI l ;oE&ʂ3Qj A%\#@HBH|$:=3 u$ -5C68 K[zP!BBz`b' Cɠ˙ 3W͏ɔ!%3o¨z^&Ww1#eH2eD1:)MTQd%y\pjq{@ԇ`I 1JJ.qko>PϫYtbZ _<| |sν?cٶV~\ڡqzj-^o*I$XN~-kS:f`U*X~=y;)OطtH!Gpy_V?c&b?GO(FElDU0DLZzƙ2:yd)OxbM+*"0R'(,ܨ&:ͼX.g7aU@%5ѝh8&)$~O{m(m&ß uO0+ugz){)a6KQZ?M2+!1I|WߎSx7ߧt@`Lɴ.'MWutYg_9Y{Ĝ}Wgƪ{0~XLGR c9G)+R^R%a{Ճc d3=;BtPTnx9!H8^WkeayM̤TQJŦQrA 6M~Qi\XUu)9=~P`^ݢo`UWS0xDm(oaR拵(DT U꿹4˦)׀AB2h[KrEuquwI^mڅ,/y {bj?r[RmjNھgԿNj5x !IfF@TRQcN71DX4k+J'3K B0#M46h΢.%ո!0fuS3'w _VF`F(2[ ?Cn4B!FsjFC]Trop9;%y-p>_ŽRk<_laAK3cla HYL϶;i3ٻFr C!r oAD^&׆TIi{8{]q8]*R0cg[R$є\֬5T?9Y D5?*RgX2^!Jfq@:@3Һ?A>& o|Ew62j#%=*Y,$Cډ$[M A_5!2=}!/F\*NT~x${ W/Rդ ̸XM'A"]QN)_񫋝o^N0Mld.x,D8ޙ8IKᬟلC¾wIr:#yif<{h fM}oG! bO-y@Kfڢ =@!o ]yKtf:/i6pWhqdwDʌ'*Ż:1{A'];J*])D&m ~-& |pCWW9KR[C2D%cFѥ0Ka:NC *3j!k, Z#5ASdb~q[KvkI鹚uK0M)`^kOlvmi5G_Ç;MnCN>u]:Lql =*@ndJ%U`;wqy@]?FZtчE{Bz$Zv߆xug_Vn-䨹n~V GOtvB̧{JwnCBKp PDﶵ('Y>)hBiLHMzؙì3ifrk`V|L)G c _FHn ֩Mo`I߀E9&jaE6H0BN{XiXs9hU+Lq̈́@e=˔|9 =8~q&I K (h?PmcD{ ;H TӰ]BsA{JoSu`cJ3vy*n7&>u}ő׉#G^'rd5N$W\%^REuRF.aH3-q!]ғ$<;tFopGfuJ@stFyZ!Epk5Xa.(!!8Rg#5F3 "jD4Gyw(ޘ4" CGeI 3;Q:fJ9tk΢Z+ɂ BJqqN> KE(cIi|lW-P?Xc7`6"¾2;dy*Di] ``NaÍP$b}٧1r*1 U*'K"87e2漶Ze45ᴉ9P`ol-Pa1͛0VJk,e0쭴Vf}qnj.5.1("'‘ 3Y"^*r o( :*"Mhv4]ʡꮞ_xW-X9wt~˯2pw?֬RHlQ2V0t~]jmA ׉1Ie*QX!N~ (l%mw`$ 0\'atXTh `)yT[QdɿK+{RrGQmory޲\ -}vB׎;))P9)GW`x]`stFK6k 8ͫvԫBXs/C˥sz^cfWAwlG-JJ.*Z6^]#69S;5UcpVu<ܗ.?_BY`BrRT_[-f{ 5c¯)7oDx&y =M}JҢnDMOA&>4:#ϻR3qmѓjn2+.|qy5? 
}ӈDo/:.y:< >>)\ktnAiوT{8@tL%|wQlxPPa"-3E=c"Y ;猷3hNuOljcIC|Ƈ(-|J:8քwj) T{QG˟z,?s҇RT+`wPӺZ˨`o#Gi[y,KS @ekE/>[F)-<#)UfݼÊipD©l>\wV'|rWU' P;ZZۣ+5j of3:3ƾCБV ,%3i 7Gn!0FAĥ3 oif\ aqw-*9v'.@u=!$S,7ohB(:Ƞ%f*ZlZy Xv;eͼEid']%܇tfgK=O[N»pVu4X\, vkZ4F R!+ ,"T{ѱ̙!}ZM.+1BO|VvqKc!,!Ǵ[Fpmw&^ >u^.g,:px2!f{0$uSv`Xj=j4en!%gQlqF[읇sjKp92pSu[o.3Fͦ,Iǧ@_&UGuҡH \m|yVw8Æ3C-RYFAjWh#ә,b͛;Ԛ]ݖ2ޜ-dAQ`t;mokX9g3,XOY$ %ݣ =~hlD-0H-](K%Χ7al=1˫wkniJ_0SbկJ6}(%G%|U>VrFxSSKXt4RwΠ̎D 0|SDŽH *XdhdI@I ?=clw V/g,:#<uG?3:%Q;_˓&=_S{\_6` ᮮ q)Bα| WӺ~u7jzն?h)΅hMs]_8H$xKYrKres-S;3u]_980֔!GÀ o_1jme+yLw6|~ ] jNq.90]$`3dgr+͹ \9o8*ky/X,7AcKV ҀD,J+^"N03%fZ!ոo-;-. *]-Y}>`f`gTK]mtPNH)ƐtIe۷6*+b}̛YFp˰f s+>8£2H0o%xQB9. E5c+v)֞MsF݅x*Dd~ݲR,-'L,WźX$a2*#҆=YgDG0u@P JjR}>>\E~ZLT˧GOױ~o{3# iyqy93_~ä>=+c.œB߹5œI%^?q-s,.5/k*Bq:dž₎3'&Mf1&M\~M\սzLEZ -"ːpQI܂hTV0䲊S$Tr_S}|M8`C X$@Do>3_pO&ڟ7zrcUoo"Cཌ-g~txx\\zxx6}/ԛkon#wcnG 0|"ܾ2#@>AoU2r&Ž|2akt /%U;շq]h=Gr *L@ 0-ڄ e ,ڱN֑L/)3>ihi Ddx]sF6,rq_-y9^'F/p-ԙ/n#ϼ6]ϯC%'J2}@N?Vx8_qo 6LY+fm:K.)+@߾~$ۦI,GN &L+ 삙'(ז!hAtreS߈(2\QnTlY3_wUhMiof<KU$V:@7 nMHi&R^pᐕ 2o2RH%orzCkfAY8ydJ2vwX0v=(4-q"/kJ2fa<&N:\Dd3F$J1 gLHAc6#S ,o GњdH$nbOƶmL$Tf>(%x=x(  :&<ǻO?#,7?w?=,>OM}:FKZ_x8hG:THKy;r=AnʸmZP/y1Ì{@!0 7P7޵$~Vo4O,`H"I5“9Ԓ N>$Nʵ?}t*O9P镢}[J35'P1xJ9>|o0#*3[ nn~|{Ǘ,G7Xch.)e\'=) 8Z kP03bP$iᕕR`c`ނc1-xF=w4r=Rt2)X7+ia(gz@48,pr `z) Oooꌟ Yr4qBt> +. NR @nBD^4Bk9c! ^ W27cֱiȩ  ChABΓତ (')) RQjK^Q0<ز8d,%b7{k"$lhׄ*̺o% @Bx1E7"$խXy xmS[MiAj}:KxÎ+gQ4]?I Y\xUǍP$D{qn qZ :{j+n4?ē\'r"k ـ Tpk\P;xTk·@ {֫0טU\^:QKukl,I- Fg5?zUc3Ѳ9?]ɪL1rE} &m8mbCA;}.ˏ/ƒEfjfW.n*$WZuok]\̮2;lb* TزnMC,SnH7kqJ\ RX'] .i-th閽TY4:{! 
r1Hawtn"80Ѥ[BS['g-b<b/Ү] ^i"O * r t(m,+ oAPy"P"ҽLɔfC\ "sfyAWXR91kX>EB8)sX@zގ};{G+ܟ9}C^4y $L>>7=rp)YLv)ܩ߶jrPPIJ$q$ ˗ײe9:ײL5!Y~"eXX|Tz(p}ܕ4͐ΐ31¨4`2$ ncv !\ρ&/\jN,{atk\{Hv11n~QVJ=f*h--ZDZl?^?pIZ|B>eF# }(Qh#+.*<9_c):V,r0j@DlrB8"J,Q+Ok:tثf>xy%VP$Y"J: r"ZRt*l׶"d JKP0:U@PeM ٬FZa:7$×$ab}+2W{:;?>Gcv Eus2R(&i;aLkMhl[]h ~eXb\SFGI/X4JIdYa@=^N&eOS#15^~$!>LRPSWTUCgPDHj_C9e9K.Ji-~s2S['2 0C^fPAބ 8 F"3A+# u09惖HjRl( }Gq` %B39DZ8%*HLQ G:bKI/mQSJ dS,"ւ"i3B D(GaIA 8!kg!^:,# ܥf!7.-y?zm3͘ŏOvLqV q#k' 7" 3.DbU q';de] Sf6<>ׅ/-t7+OI\{|3i!_ IXgCf_q) 0DENtj-0wsU\^'b11ī|=}9ET_R`Eٰ׭4^:-/&y:aIᤝW[) ;.Sa&b/_P#pSV- TxY$Ze*Zaj|b 8bxl1eVzŞ8@G _(9&dyYHȋ*NkM[ws;*KϻW$Z,:ʞ~ktR;O*ꮫ fN3"VXAB"bE)n%(@Up縚JB]?.Fb|ަ}hY!U0갿,K-q/b`QpSу#V%!jzUiJڡR4ѓ>{q8:Nˢ}5M2n{o '&<),ߴǷvV֠K0< Mú}F[Nù#[o~ KJy1{Çk&iQ8P>DzY-@{;3!=fw1,!?>r$ ye'䴸Ȳr;;!oF+5BL#i$`ˣG'bNBH]΃ ='NNAENL}-`{6;EG)j>nzG#KwJV;V♅fq }ȃC28gBP6(G a8Qo0X(KK>ҿ+i<V"Ek(!pcis虐*rrrc&ެc {dX˱҃rP`(1%_a2r䰓NBco *CWR%,zČZDL6v~ i'$VnwҴR`d3!FQ@V(T` c5$u8 #f.mNAx_X6YƤnԤ)"^7_\r~V;f_]I.W{$ }(Ǭ kG ZnN$ \ B FhYЌ2=f3ĢK y1MYѨzжip{Sa('ѦhoIK)ImYVؒsYDlRa1<*Dǘka))$UZKΊi…MT9WQB6S,rN`*~ǿkZRRfmþUY1-O#V=FY=s1H1{̟]mvX_E-^DK-^@Xۼ]h6J=auY_HYC,z0OiԮ+`Ghbe]V?ݨ~S33S ݇>Y{}և))"kccP2h05Io5PjQ &pg"k P;Z6ޘNDZ1x>ݑPZpBC1rHQH9%Tz0  Sv2['+z̖Ė~O c8U"^ /K9Ju݋d䀊p2/8s!v5`rF:1砝3g@_$=+&jnI 飓}=%6v(U di1Ghb:#i68%-3ؙh9G[/g@h ƙJz/?;#5lBCu\yts/uc^]je^Nm6 xT}9^s+ǻkonןm C]ms7+*}-2xm\]:ǷT$%K'I[߯1$!90̐HbO7n4$F]Qd"Ŷr N$=_^&yS0Su WC9O%|ʼn+wx%#A^WFiXF!W'"؊:^yU,\Q1h vơ{ c^%Z3lZ~h?u_!ا+xѹNF_TQgoM}mN^{nt.0L-mn=>s1l mzX 7_FPܖowuG3h;wn žVR%1 ^c q%Pjr 4\4;ɄK*@DH`$U1dYE AM$4Q{gE2h:мsmKuh\^9tu[Yևp-ƩܾMjR rD]}/fӚ[yGLևp-)HWsC)ڸsڭ)}G]bpڭxڭ y"&SUr,ny .Z6vBܥy&09׎ pdV*bwF6n):+,>^y6<ό0=hí@ f"H6T@;>2)q΂"`h]?">۴j:8ܲզY_D'1>VJ2V#*6 l.=fx伷LQҼpoY!݅ c/N]+ Uo=S '?ܕ0rMw;s.nTBԫ.fM^vqRYDmd?/WYZgi5x\I10Jߡ "R`0-C͓W_NޥZRzB^,nDդoٯa-Q݊9n>)w+b~b'T9W049oڜh>RjydZ+DX4*JUpL{-JlImfja0سX'.Z':$ !(N(U#+rBtpĹJ2T 'Z2>ŧt4}GKbEU% +ǣE$iZyVICpPHL$P:L!V՚vC]ܤ|/J֛/cݥ-eë́Q'0vˆ*V,?Yy_Y~ΈC}^XY9w:"e~7`nFL WևAPPtl-Loi_`) 7t8–w ?k!|CAs83A]|F|McӧyiPǮnHj6y'a Bv:X>O >VDT 
wf@y^3arE]|႕+).4ˀ Nlԧz`×]K4pw$FFL hwbDf>D 6FjO+N#7/sKng|~(Otv"\Tqbu^ZJLc8: NqN &*5:61ɽͽEzgP7іVW'Q8͛?2{_n..~츣_CZفB\M\(LzR#%IQ0%?pyv !W>j]($#;\(>uC*[q06?:'f9E7zH{p4]hf0] x]P> '8fw,!lH}LX~:/Y;Up6TZW#PWW.OMSnjNo4rL>7Lk=2n:7FI9 VyDlIzDw[3*g' Ҝ>N.:B3eZ 7=2ʙP i{ȍOt 'jg@!_2ewL' eڳR_ړ۴YN?9({{G> !OS:΅p\-/|" |<0D N.10(V˩}tJƐNƠ}b('RJEq3({a AW rCUtL  P;*jGR GezI&H!O#2+fc!f^ܢw@7JM]oW M#9DSc BEƖ1z]r'0L9agsI[ZDݢw-Ay q{@,DoPq7Nhr`PEDNsa @﹫#uR<\Aha+R GdpW}zmZvdi0e $*aFlۡG6HVmg+BF&Si0FZR .Si6l!5mcT^LSEnnp"Mۗ^__1Q=XMʋ<r-9國Cչ4ynfI'I[STBj cm!TK^PLMxFN9eem7IB^9 A!hQ T0 H<] Bbe<.\jj@mȨ̷M%JZ U{ XNn/n?Q P`@Q]kIdIx$h>ō)8*VJ-4%r|YXqB?NX:?\5Qj_Gվv[齑 9VZX8'5 qg/|ˇs3-5$FSf0 tn?I,:Z^r"cTdm=:mcxIk.辢GX ua`cdN'T3"J[_ S4B(hDˋp)HAEse=kz%;;k7~+׊ 0{3'w^GZ~DK*LPZU@$u5A!߽~O춡( ?[ ޻us."LrIa8q2qxy3`O2*~%~ Hk5Hmヘt'K>J]Dœٴ /{ǫ Ah#7)H¢餼M#zA_#->P87NL+֘C'5 q;ZTSrc⺈R(GsWҿ͟GS0'7։aYFgeu*ٺ3xh=*~'Yu6Vp ۻ_b3F=Fwt,g7-8@6O6y?"[MU`S߭"Ա?}Σ磛Z~>'QN%帑9'~M^梙: DmXE ؃%gPѶh䤵x{zqi=cF|UoFiatW{S 71~ϜY#rxwKthN2N(;aŏυ-טS b ea j{<+>Vwʨٳ1p%e<;v7,=1.p롢0Q b˫Պb{1]Vm΁ ^1j] >1ԐZ$hL M~b.b: 'T:uTUկ|\ j!7dz-U>7dV(r >HbR2!ɦ5pGk;x&_ھb-.i(>xDFZw"fEsVXOU*Z眉@Fq azE&b5զv ~_E:,z:tBJ)k\޽vkQn5P4 @On0 1ۯC vP8sDN8(ZZwJ&nD*^psChmWxS(ib3hG@@/\0Q?ڶnO;^ՄtN}CM5e;PR ЅEOG1a.Nln7?I__{'ќgQ4/P`s{m| +k蔱6;J\b -v`gРeZ5Bu)sC%INXDL̥QzKˤH< ,)r:81# qJޘyK%AQes+FhXYYJ_|A * ΂t3S2en~el1D%x_[ZLjrf/vq"C:%yKm)gr2HH"Bʜ<M:@C\Q`@[ Z$J)[y`%) L2ւ5|u #^j r"TX#q ,(P Dh`绺!ѨWutuE{tQ =m S ({Er$]We'4P>MNePQl*`\k]CCb8ݍ kPOd:<Vy !ٶ%րe`k{ [oNjϭ4DpnBsat$Ar /9nQ Foeh]Z|Ԓ%!S>dh*zJ.忍?ByܚyuG1ʀ(|1\^vgk~x*ytreg.rxܒD&UW/!"^K4Z܎&:1 AeoY|pϛ`Л \fI#ztHxaF=;8pyu@6lM_x8bm+%mv^m—愇jp6_{?:ML39+ݙ{Mp|nqM?Ŗi՘[Zj%״0qաhz~? Vo˨ŎhU6Aw%D*IM+H Lt2L1ŰՇfB04I#΄# 'r˚j@gP^v)O=Ɵ}} +m<-cO ?GHu9pU֓ɿqeTH78j:wm\R**,u١Z'q^:(jE vu~$|L>dlw?Y 0ً(EA/coH tչTK+oAo)s(ecqkfl5,ijG@FT_US ;2|>ܕD,B"899$5 x@RuԳ䩞(ZrÄtCBUd=<S}6jII?v\3+ӼOݖ|āMk⧷I!sm s\I4SY^BHӚ7n ڱ1agJzȕ_1NY&% g.3`Чdf.!K.Td&,mVU&ca0Yj?Iz9p~}j1x+!-%|@( Z5)?D! 
TW Csv/,*իboG(jew 7Tzly Ln.Ϝ[nwƓ'IO6 n}v@:~D2o[dqOk[%iACV6gV7'zgw_jWϽ/zʋ jIq_ =UY'49PUB˰MYu:ui86Do8s{-mZ-iw a,JدS>/t2-.h ~xSlaO;bNPF O!V#hM4 l|,%b4DX"IS'%jg͕2=LA͐ "j$S*ozyx% Ȅ;\q ޖDfxP9&tsx>r2&B j f !דLאUt'Nf"sb%jbSC֋Brh̒UK35e#;ʛ68ȡl:S4yg4* L8%$xέV \6E/,; z^S9m)U2[չ;}N)47W"B$m.A~.lʯL3"Vɞa"1d< OE¹8]aP8%4D@O,~֔q#2j*gV?vWR$1~&dS帏׮ܺ%̾Ŏm)M(iZ,T>- %p$TMhp:t񎇛/?.eV㚖 !JG{󏲙D6!Z >|ռa 5-r崡qaR4d )% h >CdCx2rr^0&X.jyiׅy0`vLm"&Z睓^WeyhLӐq Rr8'3G-GٌU0zdV$1p`fԒwF.=; =S7h.P =egL#`{3A/֩Q?SD5Jc5yeU,({~f9lO?r}t|8n 'ɩY `IL 3ЩN_)ww cA:}b$bvݢ;gAlT(^'c#.)U+"&?`i[_OVw`k)r: K1uejxc-u UtK2'4]؞a8= ;_1ԎRbj!S_f|u.AT\P*A140@""2O+0L/gM1A޳^jV99@S2 zTq7t6yb%qѻX?r$/Op_k-It&yd;ϗs/Z0|,VѢmфjSv+:it>ps!M+Mk8k|09p=R^%{.u.XI.5(J%8 ~['ϥxG?<(ھhR qd(R7/'e DVG[QZ8M_֒uFvǽWO~İSxI UjR5' `N Ĵj70=պ"t X>{**|:\%wL~Lݝy}wgևwuw޿vo"$)2tݻOW˘AwDUu`.6o TqGʰA6}W8+s\7ff#]A S2dz1Wqv4+E[C$=}i4X8{t,j>1~So"Sn>ο}ɇݛ8i*8l>Ϸ?ǿȃۏ޾Xߙշw]+;k/ܺo~&pD/IW\^wɼ޻K!rcK,{I4WW l]hbMB)5'W}b|k勒hF˂CEdՌa|FI_ͣ&l%+!ڙL$4ѳ%+9-(qY9:mRCiti]料~^wvtD$JOX3G85_%?|ψ,%/ 5uu5.F/=|( (|nhnG0L?屽M)lηۛ5'8$?ir 7V)lcjk5>4;˜} rV0|v|oj 9JsfiB+vcUIV,͙sW9ssV2q' xuq&ܰt+C PLٙWϭf/33]+d+% N;b\h 8y.hC&K:&~Z@$dZb1(L=N r OQlԞ<7 AD56q? A Rb{R1(2; ARҴϏ,)Ny#iPl|Rr7`>I~\тn#dZDZjYdXjy 4}L$Y3{l!X2|':TQI+ (o9ӏJ~ Mj]۳(PQ$ȅvHݔRƑ7x~zi]'#Y "U/;W@ Y:hӗt(CD|? 
dȦBOHT$}t1~?j%cvH C#ceH͑M@HC4՝%" PTDBU/k?ڢ0sy>ǭpS)m!],;GDM.6~:?_Ќ'~@mÂm@7Vl34NGw񎇂cWYĞfuW"YL닏eZ0i5&]pnZ ,vQy˛x`kTx LhK5}2:_>8T#jA7; Q^RO9k#i 01ǁS?l#|zz}jtJ MmQTC?#iFG =z:/ HRҕՎ$4}GJ\e")$SFciEDzӦ`zޭAy_ ь"7Ql/H2(ӜU4OQl̜.wxR6^2pf;\qrYj( s^1Z1ZKZ3Y,J`;`vhvM N5A=[ 5; XWhG#j9fD 2"SbŢV3YN`ۧL}m@:]aTӇZgWar;-38k zZ "r5m 9͑"9q [dQBu󘺣[UC+:g32XUtdB^z qI2>k;g`)X:gRIH+FPW@*%25Q5ADei[ET▷AݹD!b`r&OPg~8f4h͑DR6ߋr4]I1rL{{\GiHFK+U1B%V̎Y dRMWM(TTjѓDkdCl3H3H@MBxhEE@G YLTNXAPqRfRC#L6C&w:z~,FF#$Yc,jypw4֬lLөV0fmQI8:QZ0hVZao?a\㷲B~Bm.zZ =┍ޤ'Ezߊ#UH:guWx& ^<8FGW9.r }`ȭK W:΀~ܚrR6!%$ R!ᤌąFt:'O 抓pqlGdӣnSo'Wg#*֌\7䳦ŜKr!%ΒNR&'~bKt?g߸Fb-ys{">qI3.VbbW$]lANDoG}/rmIB2XtE: й('(65P6ǡay)"ؒ.F_ sEm)̥͍F׍6u4\t넘 b3G3:tq%4ǚW))FKB=U|s霦(s% XDGу!"B~|ϖ4cj `6[1%%Ҹ9ĺN͇Zh"Gmt;:G8t`((uʝsB|:JR,&~5񸺣Ncv<jDd&%QkUR(aV #"֢~8v [MSy8UZaP8UN37$wXzs\mZaKDiCdVLmJ?ʜ~mn}L+2y[]rH}&QH-kNo>s ¯nK2%YMjAu!ʥx4 F*ͫJ Oԯ4g[eTHđ!̫֢̼&9Fy#%R mUęBG["M\)xIзK/mNDɻ:qB3bhMj<dyJ:jNR9arĦ5Z6; jjmE]ZrThyl$(0T~׊^j&E#'"-HޤSBjB(DPAªTvUk,Q X_Ɖv6ٝή/0f̗TJB$crZ'\3\`v5cG*]K;=ySCIK%yr%\p3$ 0$d Z1'wXi+ bn:$)"JLE܇ͤ\49v'y-4fm;AB XS ZAA$"Qh[))Qsߍ74`dlѶTܩ7Jnܺ]gIX^EB9 ,x"k Bѐ&fk>jkYލϽDȤfs;(id6k$ 4O=B2HL)SI/JS]ʲv9k2++_JƲ@\8?'ՉBN,т*2%wXwBU3H7:wؚ [xIP+q:^M;viѠl`㊡NbSP&LpHFZDd|ۥ+?ʅ`W.n$ Xnd#+35M%@IƛEV~ ߲)P+u,,8Be9D)QN 2+ ` L=y1n*f #i*ނ5M( {h+%Q$~Bty_ *b!VLb1Xa]|L6HM4CQ6l|N..FKe_XVq ,@7Z gwVVCwjV%.W#"z t%mUD}`tSwk)m \+ISx[WqW ǡ} #& ԚAdF8Ec ;cbsu>lmC? ea A$ci׸ .fH' |tx 9>!1iG[uԂ%iH?nOp,#O.lu ̐$QλNXVh O2([[+Z`_v/〩) wF2k-%YHRLX]8j&|d -3.Təݠ{3ٵJA>>h ޚ:PX;|3`sКAu ?g'P:}}.{v!KjWT^(1Mipbz`Z;a/pyxe3vʈ gFrOJCąÚz03'4p颏ݙgq NX_@Y'L} yr駵E:(Lޜ3L؛satr]11"3kz,%GL9 ӛ2gAc ΂7dz*))l &zL%jEWjydYQ"JSR T ^;8_N3G2MLA (ϰS2wuJ3!'-ϸ. b3$ KRˬ<u⎺kpL'58iFdY{k#JCZ*NiFtf%W70b_IJM%FjQ#MKu5o! Xq!I풚j@gz7:\neYHPNJDDŸ81త,XqX&%OU)2.%=E1NSg e*2&΁)q% ԲWܠlRSJfDvN]@N]k'Bi' yHP^Sd 0 RLV56?\\2}wf.B>Z&ު=ZdžpѵB !B$ ˌZ&|ip׌a tVNҰλlt#I>Ѹg A͸;Ir`r XX WA&tMp"tGG.QkwF@}55ˏۻ)| ?Ť}zcW/mgK/;\=ruog΋/wϐe ~/bz^ ~@ϟ}?ӢӾwϋ Ɨ/'o/ɥ;=v./9B~\7>E'fp?ŝT(4&t}{0wg~| | LX(qq'g? 
S|q~oLntyY|.N &}5\t'.n3Ӻǟ&{)^xUxbO_NQ|.(6u)0wSyAFĪCBeX-pW?0ȗ34f%qx>p1\ 13y}U᳽Ru|χI v\t|lr#.A *f8 R'Xس_Ej_s̆汿yNYÛ̻ 7ugo`t4R{g܏Xޏ}I̿gCi 嫝a{y בOg?<{X*?>y}r4fӾ/xg`0:;vf0)]棳v`#WxqL|\ F'p_֟ӧ9w&s^|~5t mx'/_4sπ>C1sテV䓳әfhxC,Nxr~_ȼzp YQD?b:MA %bs.13~*<+vNtFp-|rܲAF)Oνspe@N E8<'pvgz Ws0(*hr5mmͧ &m{JW oxd 1jL)#'޵4r,ވ&~X7W"]]ItDsl`Czlm^3aNlϫA7[Ds WZzfP,fQl0o|8}T@0H bg0MLic 'l{p$t|@իyQҐsnŒ)sas H01W8$Κ(A5ƀ_N YpM57[y4i3gPE(&W!p8&: ,SK{Y.34&bDTXPF5W!ԻFl%Hk'f]`рTM-cK0DE$ tK Z:"a] zMkza] jX ;12pg^ }ǸIwkQF(`Q0DB(.WuL21_BjZkiƇy BCm$?y_]HtiD2%YVL@qNa5 GD! H,/6!$jȺ71ɍK wAptv5^?jFS~;`L~Lb I i8,3.9I >Xz*[4noC~ۼrlhh<-$! {j'##G2R_@F{I;r1 6@ \ӷ(oQNߢEt6\ -( M%bQHs;)1+KNi?Ge |UPc@ 6P \ Q^ {9 }QƻI+d )-5y\U~k~] SDǐ"y7"-#RFix%9˽-E%K0aIRc,Kil50m[Xwނ!ӟlH_x_@1uT(Ms<8^q8^@r][v?n}hm fxԉݰSͮ W b_JrȺ(+7< n]'H% a{KwP@މ/P{Ž s\s6t01O;겯1pLȿ+tҠ'F&7 GNݽ?]x,iʼnQ" gy 7b^u 2'щ/<'Z(gs๝hɎ<`H(B#$E設҄7#@t.eH3Ъn'~#.1K >ƥgu2go_,s݋&1i^Lv/D*ƍkQ-o( [ʆsF{riN&2B3]A8Z%"6s/MW 2cy1`#AYH b 7s63} 6-d-qGn͇Y/-ʉD!0 W=~XmF,/<7ó*;!T=y|.+q|k*čLJ{i 6(koFKs*@Znl5>p~{μXq087Oה`m{fYFXdy$n*H̥ HnM:*8."wb0 ~Zh)$NQdSTQbE*XM e+H3AQXy)H)iAxT Gki^E8N/|)xM ZC C8$P23 0\~:bAŎ8TcE2'HpѠ}k7wj^xCtNY|XJ񿾺E8989F2sH+2NH&<9&1/>侨5y Z'Jd;c$,$WːpVӲwvG;ʝQ&;m^d+ː| [+X e5Qapp2ib"D:xpಃba-jr{><{57)G6zDtX ,vԈreQ(#^U,\qD(EAf='#H$$&JTU۞9ʈ?VvS$BA)o:?c.{X۾ Q/7}ÁUp`"Bkd9 vHrȐ$,aQ -8.B p>QxP86*hd~Je*䡎sD k^ 'xBA3g0,UU@ [FڶI<3I|/9u0 N `"M(8Xa6֒/SO=f/:ȵAAxG,=ơrg`p4YQUJ1=0) [. ij_$hEpd%KCCrjFH+o9=%RN5ԉRRYpѰ{WF,f?Q% @%gfY8$/y@CrJ$}9/\0bHKx:0_"Ev? LRMۀ:.IaI+WO2%d[w':IIH؅Q  4\TFDL'^twL{Kθ/eqNcj"5QxU2 K<}*b ]J6ibRA6 `l@;A1Dc/k^~Y.sԥV0!-1]˄MPmegYi4Sr .HAB2[8{^]R!̖p0,ӠT¥^:ki个MYێl{Ɨ׿,dR+l]79|d~ {nDC 36o%ύm/O`Pą|ǫɿ' PNl=^q8|ϞGAbl(| \*qpFNQ ~m} }(Oйw+up0fzS[2y*}fǖwchպk}~x61̘V>ʻ̳$o>rv?M {;z'nǮ<Ȍ/ ZDl|O;?fc/;0 &wp^h1x^71?A<9ǛƧ .N Iܷ>N_fY˭W|R=*5HtRg|+B7`zA^c|lyvw.{Wou. 
כOW[bϟ`=?qwOxн~pQ C[jtN=-@s_`7>N{v 4=t[n oCnxk4~:|}9Rg$٭gZ{_1i|p 7@I\ c:ck&{ز=Զ-K3q3ft9dVϿ[7޽p ѡ}k_}׻?@k`&x(Cwpxs|]~;J^?~sD54|~ l~jۦg䫣@9=q[la|M3ǯM`twSlEkKevǚ?}]LrFK ֍^tw7U4ШEis,ZLZ^FcK܊Sf3#ŇQ{3Njd}dW_H"Y_$d}>:yhݦcu։}Jm2zmxEc%ɬBMmZG359B$zVDAj[u`T"T|%4'q8{mE(OIzvp^΋yQ8/ =v2Dq[Ϛ7ʿ7TAu%8qvp3RtI1<ъ+nƮb_VU/x'>j[W,teUHz)VAlycŊȋdԊ'n F VZ^=T PڗjctXX^T+Ȫ-]JRn/\`ԜZcָ!7jO\Z|)WQ|n$˥|X>.Uzr/'{Ѯ^vX(ߣ v/G;F{ rUæqs ۑR/Y"mY1vA^#*2kݻ_}꨼-~0G&rZ 52 ߽->c/ff:_]k6u P"R gϾwY黍",*v)X,( xDQWEGka5ARg|s̮ng),\̞ӧ{K.asPx:x&YI9QVZA"6a#d1YȚ"mrx7&e[R^ڐN ˾ h4=}^=?2^x8O~\\oRDX&Q*Gy}Q:B+$RX2l,B^ˢS|y2R]lh~u࿖8C?Cd]3Yק X:@Y8f }s~ڼ[r۬ymgbώߌtiAvI%>6?{|oTcyi͕%.\B=$wB9TUc+@lɓA xD&h4`&TU"C)ƞ>)K\`=GYb/)aLVֶt*hS*xl N@z"UudD@@ hY+o,)V' 6ü8w8M AN(Y? " ,5 µSn4\e%KŘ($o@6+/EH/lŞ>Ġ%{4pnh^@A(lGjt~D4BIz ++.c#uuJ[wA[/L*8_K\/Vݻe9y.Ӆ[EicwG2QS:TZI62L$Ǩ(WII*Ad' 6+0+3XQ~%?Zn2ܗz].ȎCQLC.$MV1otstB*ه֏,"XJn*Bqa@9v3_)H1'o>038e嶎E.vD:/˙sAf*W$ ,V!V6(ge65&U#T.}^黽7L1IYppӋOhiC .⺣  ["-ӁBXHMfePx].BV"NeR? JV88kw8#do|Ț $vdEn|T1@ͶߑZiD Qm[+ 5 27t)J20G@ 佣R*8D%ZioJ+gCor<,o;Kku^Mǥ,āNp h?L9ԓp_C{qi4S\ GÇz4*H͋9 AP!И-KyS)j~)';$Yvf'$`vʍ mJ~ qVtM^[7UYwpP61+ِMv$NY€Hz0` IzW:^ԍ:xHL1j瀼aMKz4*o1:<۩]]^AP9iknX Ϙ_bی:u& O16zEoj&-K%|uK> r1b8 &'m ;J? &,&p3kE:FD^:mJI5bٖG-b*;4(F:KBnKT6`{"Zך9;A"S9RG.F_=e:v*뫀B&g,%ת- >N\F 7OGw^'Y{Y)ș},r%($z׷bZ.6tYvKjܘ"%8TRFRAvUK',aq>8Ubnszh!'.T{5/gGx6;VyD烓]!P!ե r^5MMޒYV\Yej!,@l!a]J;ᆄׇD`fe4"_?}w1[ #ڥKUEO^ϝ \ IvfV #9g,}e EB"lwLp,䱦aJN$b퉓-׮KEp ۝^LOh=.xw$%!0iҶ*6њ21,~g(ԃPUWe"+0*ZDc;0.vSdf{:Vu^E׎ή=CP ˭ѹҭފ W@E+jrLUВU͉2%R6%f(94"0P[[0'o^.>qRYdm-gPRt9&$cPNQt]ݩ6Ƞ3YCk[EHRhSNI4uHv‹wlі>#p{]i{YI#ƌ$#G`L;±H $7p`l-v- , tcw/0oaobx2k$㶀{ 2OmފKHC+N àA Px3+Lvi(Lʖ劧!LZ34k}e,w uP^]%f=)ûK. 3âl7^'^"lAU_޹%11l72!vBi H աFJ<%kPvkoHbOI~(!AJ=\nMHnuO/1{9 &+aH0k@i]:v߼IAX5 v$hC zIj,$;8!0 v0zX0-> LM)Glw% g!1TJiV! 
jEVuuY;Y:(:P]`*ⴻPòնc2հp)lW;94cP Mģyf5BldıUFOlcP +y[z%KP~8~|]O`^sY~Obm^Z]v@;x|8}UG?NWJIнvx?^y8;ZˣUr*uտrEE~wYQw敢U[suV6FOfX}3cKӋ+VU}T;vmw'+{lp䛇\9[,`efX]ւW6AGlݭ,=0%PdoZrݾE]lieOWbͩQ;ٔt0\UbS&NFr)P)%lgѰ)T|v}~U*Ӛyq۪h%g7pߣ*n [QӜK} #`gp,'vyb%qbkIAW(-URQGO*Xxr!Sp}Q`!n ˈW~6.O3imOo7KAR&S3V.5kvlʜlOd۠JCBf:-oH')#7npwhűM{rKXvDWeoXZ3{Tꍉ5ڏo?.r9;xʒ^*i;,j ǼNAd#boSA"{!pPUV%(TrDRV {+ﶾh޵F#Ř/Id`?,v&Aaq>$ Hwvp{vۖTKTK#aϺUI5S:*}5#s…k] U^L34_iv孰8Ž#N;bd9RpwJhʇ1iy袂K0+KfVn"\14jz.B}ِ5ӹ2%s0p(dA-9) tHMɵQ3[ߚ4Ddڤt ߴe)ޞ^ ZTKKl^B_ LԵ;˖R͑"-/ $KTԊ+"h>a<`jP,M@ωB_=ܞ !=MLnMz`z!Yd~u&Ӿ8V+yB+SD2Me?g[~vN 3O`{~4pHnXȺE6wD08ﮫVڗyy$o\cG%*rS"$ֲy6ghg (t!F-zB%|uxNbNlvlUaV:&3r!c!LheT# BЫ҇–Eay.tɄ1TϽ@Ij.3m/xj{zG3}){yީ5*C*ԮQR,)t}:15(UV"AjR{'c0FИN{,^oՙN3Kf[XMDb[Ri]c[}8C<&Lc>)+89}<zrgGz`wwGOND6ؕ"WJ#}Lȸ71S MEj<ݟ)1:'ν 7adPn5]P(#+xhCηlm:[ mEa+0.8"tוYq&h ø]ζ֓ޭ\  z\s@<塑ƀthd*{ƼRj: \ׅAH:y`v*o? mypc~ߙ%7_E&.sY2Mh\M[KIRUJ$$Os#/)tuUaœery$A@|v(!ȜDZixSǾWR:D  1d?uc$E(T%ip@WćKe ʹ΍ ņzH"f郒xAT[->cEˎg[L2OLk E!S'mRa1ݸ+R9BÌNT(;{:GJ#cC63Gۄ,FDfotFSJ "6SLd&1sBVwuٺ}jvuyX+s:z~uxw-䰣)Ub29#>YQXe&cI]=tECU=Ts"ΔD: .(*0t!YXէZui!45mAEQs Ra%CmJj S8W" {:?(v5A#@ Hަx ^9k 䎄 б'Sp՛"M p2c_.?߶}SO:d!#B ]vY|+H'2t(2^)Y晖be f;Q{d%yS܌ހ{\̈޻B!=S:iآ,E]M+LKolh?t\n0xΛ=uǔS@@\'d /7:UӴ4҄]2ȹ fG R@lȱ+r Y$dy =5VRǕoݪ0"?(|e`PG o.J4;1zA 2C3#cI d(QΤ|0ql7gJ8y3-đcGoc UBeP0`ԸLBxtW+?[o|/bCw,{leLAf  2֋I\@BC°7˿eբ062"e틧#R8]"/kHdE9ɮn!)w_nE#^w*c#ZΛ\g>p%:tWWx[1{Lzxá!?44뱝uPcz*% 6Tl٠VI[πN9 Bi.n h-1Uxmў_#|'ʒ}uĞʲ#%9\l$~~o@])Go+gfe[хՂp9OQ۝ФZ6 3)TYN-frL QmkQWwA( "J{BqȁRr*U|%x ;J(0G" FQ%sΑHF 8q/JSF" %ۻ{+CShy ;]G'TDb<聖N0bTNK5CsP!$Nť"9xڋ8VaNм"AxMS\?P[2zR> P7 _RSKOANF"Ht΅Ed@sy|&FWkA+Y9'-UTs*^UK!T?~yfٴ7ͻ9oM?l<߼޴-CDm ;@o=/0 0/0 Nk*̈́訪\{mVS׀2M8^\Cq ei/4nD&> p_VΩVHɦZmR;u899'Z;,ݔ4st{W]ᙱŠL#;Lf` /c۫Ac0\">2%qE1Ozõd9T8888O*w<>m5J01i#$8Ư̞31*zI;t`qVH:($A*'rhSr_2j{h" J9T1xz1;u.;܈ .iMP ,p> )DJ|0Eu0iӊ #sZ(Ղ8Op4tр{+:5UsL&d{;E{coevh#WRX Y%xJPʒ^иnٌJ $ZzZ'iBO_X 9. 
j$+|5K@hn1˘uƭA3Z"BHxƓͨWmrDK1~4&@_CϸCIR:Ƽv@!֕7S>D*Wٲ1WT`،A%|g6# T/r6jq)gz8_!2ܴ\b@Oqr@U&"QI)SMJjnRzQSmX::kSp*?6 &흆n&Ȭ_j1XbV*p&`eSr@<HSנ qcw܀dQ=ʺe% :odu5x)f$H&oߋfj9*Hǂq B(&[Q^qv!Xo9Co9=1hHOɭ#qۮ;}B)ghfJd,zʦ w_hn^~Kנ`D:n H&J.>Y!s]Hav]P|J4Ze7"VHakނ0n|JEuZf^ }85b ~: ;z+X6&H"Y@-9ät+wLŅȹ:V  6 17=,~xgc=&n |u&ct5_4kw>8&>OPͮ`@yXU/ ޽!H4'}Vז$ "hNpuJ_a@΍sBL>CpAgl8|d ƞ1VLNA\?zuW-uvN}M9=ϐπ,Szw׃5\ᡕh4/:ۯFmVE8WEσJ=]؃t׃e$=p얙pwCesn }`w1ZGZsv1GPAcDJ+ 8莋g[i p$:m%t(IװR 3׉@ nh1xwrdu3sۈ YW4nUɡ#һT>wum("|[Hc%I*!s'>_C)u1| rL Ƃ Z[ymSc:t)Ipntr F}o|sO~-j=QTM%.7lX#w uȥJw=ƊMwE#qK#1ouִAh!qEK6‘@P#Sca$ !:|3ܖb>OT Od' j{\~t5q0 a'Yxhb||}Y:ǴwZ|CNI)'#޳_/ܝdNcO(҄vUu}M=:;zh/܋{+{.# W&|;+9:[XWO, r6ą}KCw;$}s9]ܟ _N~<SP^g)k0=a:9pu8eb/2Jb9G3Յ2>z|u!AS3J[?'S g{TʼnB:t1-X0¨Z^5}5ȕjjZ*ȜW+2o|tZ][XɩDݔf6 hFB@Lٰau&9ƐP`midfTxg: kjAr0z8&M ™  cRŒZYarDVgf*AB~#S|Iq8 rיĥfl+ćkH5/T—RsCC_A`E%qj`:M=ZA&hj.4uec0@Ez|ZK12k辬~FVY=&7U*buA)ִM>t(e/1g߱E9@]PyAeiʇ`7]KW.kAgt|ܟBъwi0ʉп24&a(p;~& =_{za{[m c n RJ`T7Đ3PᙰiIĄXnPq\|#Ĥ~[轥( ѥPv3(VDzSʝqF=mQtiqJ.zX>f6u>֘ƤH5&E1)5&9|nZфs+;Q3&H" AhEPNG6pK,A;k fRअ4 24GFQ!tn۲F;QP!aX `yf\E4켐ªF1pe 4shgrr_#Q;*)RMT2Zf)-Q^ߝRyQ@lzµ8P ^Ub1{B(V :\iژ;@aY3W7궻NPM*Pjyj<Ĵͪ󝥶3b7AocP_LF[(-JOT#`p9c,/u3NM6ɱp+uvYɾgl13r 2X]=uM..ww8%'Kǘt(;' @"\=-%F?n<|D;Xo.ON)ǣ 3_|j~?O$KE2)b#tLҵ~K  QIEM*ϖ+Rd]@^>*_'p'͍7ߤZ9}bI%LyEA=|?Dx7=]=8k@S$ճtY:I,zV=I5(B 'x`D4B!i710LHr,Dl:Po9u X͢t{Q9ݜ)^[{>bf1mmP3')Ifޤ^\ɉ@TIʓLg$L[RgRS"Ç_xj{wu}Gӫދ/1evSy *t%ަ+W|1X(;IN}z?173s6nW+J3`M`IʍXe>WnK͛5g|STRp#c{k~^LSO7>ݼ˷tkAvY omG1HXpzʳP) w نgw /^5٧U^ŹK2@Q"4YG6ǵ[kI1T?.ܚyd5W'iW*ZK%V ;g:@Kkz=>iC8(0RG2= `޵q$ЗL?"}O[Y[J^WM1$b{"-HLͯj*#*w4+ɠKF~}XLvԅry9ɜ4>J\<`k,\9%&}svj5< =YpYyu,a tُ4 :8aBӒ}ʀB}Ni̩h͑97+d3b:HlSt8SMP)K%W|H %]6D.L ^_Hs`e1͚C6/cT5[[a. [rz5ePl6+jF!Dv(OH 2Ɍ=ĚW{O63!  'RQzu% ܪi! )%iѡ£rRgTtOeT9Ȋ pլH37sdqI_b:)[ z+2,g!*dQkzUe~f4M)OqDQfX- + Z,Zz0P$QGS K 3;a&g#8[$  QL^Qb"Ӧ͈ن9IV/ӊi:Gz0FYzK#_cK.;lgM92}r8J磅Jxn#LRVY2rLRygrh@L#:) z/+s GoɍB6Gc8b-VhB> 3NI;2~ZBj&6,ˌ=$Bcj󼭱++lv%׌џ9'ۺWz|;[V}w2[xfw_ٮLAL{*\i|6@uço: ̊^_6k]6AL!͇aC ǟ\@t)&Xȃ ݙ5;+=Н! 
tw4mHԪGOtJޚYTf6?ɳ(!;w-Z}CJ}G^}[nOK翸O3Ϫ⫋@hya>S פFy!_Y5 JF t@?|!h`; I65]_߾}ta8[}T!3!͔P~f3dxc]pxVÚ,Z A-p[*nǗI.8yЦu[6xw-+#Kb֙EٌF1MxN!e2&ewHp)8m+u1ZeB`I{fv&nDe9< ӯ)k?4d|"6lSj"H`RCI {Ԍ6,NejF[}vN*'bsG9 7RӏkԺ@F2% pDcW2ȩ(/ W_+sD4bMzѷ0) R0^!lqb7 \YKTK|1  Q{t{/ 7 61gEQI{M€k$iVM ޔM tVks"֗DeBLR"6U&E*RiCV5m5i"^^;'WQDuגWeMٳ֖3dƫ)6.7IZ*5x qzˁ]x'v"Bxڜ@Y-*ƌm}ϛՖ3Pjjl$= ;ik[XE86]"9Uk Z9GQo]`uIzn&R@'rƁk3jTb *]J o-Mh܂j\ʺo֪d˅ޞC!ZrHPK{\.)\갖Baz 7rl ?9ARYs=-yzXX+D12XJ 3HҚ>\,V|J$Ov$^-~awznTEnJ~K2 *Y0]L ;q@N3ן%oMA t* .4\m5 eW; 5qz,.4 XeȶCiCL*-奟zQhGnwQ!rѽ%Mx[/]BV[ @d˫,&ѡ@q9r\{rKI9ހ4Npg:P (Mq WQD6rLcpCkbh̨NkOK\MZbؚ`y?EһjRUm\ @ޠ-^I .^A xeO Hv~vNR( FjCOܳH~aP>U?+ FU=>I{VLTz( 1d(F܉–H=_57ַp,$gfkș̴N!Q@N1Ͷ㾭)驨zOńں0Jp*{oofᆰl{*:y |x3̹,m=jn˩iU=ԃפ/w1:t/|,;GSpY͟Mb*^t:3y>/~yoMZ[q !O~7dާ1 O?Te,+C_Lvf'oneT2DaWo利tN aqYia dFٍ?b)}a v(|Ά q91p,F9=0Unt9X%N3V g?(s4%4@ `,a@&X=ПS{ 4S{?'|ja>17^阪䆚3G2t-?z9'eB ^F@f6ij@^ٻŻ5 (7N;@֍o DWaԦ1jљ5vAJ8Ȍ6:RM.i$V+T HDi3P3c/FQV%;[xtTc1WmL5rg!NUI1C+{σ՗4mbS)a lwyA *},/zK#+.yAKQKL'1~]4>TH{^깋M^(WrC` r rq8ka4-H.88HLo@'$IUt̿!d riw@ބO7Mt^JZUXf,vKP޶M>CԀl tKlb2-"bܷj?kdpoMfM{Sjޔ7ݚ>_UB 2u)X@{74 KrZ8ިio[#wجSviRjްAu X4Y4Q|mF{[;QB.9yr=Z#ԧrrg2.|Y<\hĴ_y`q5Na͖_uvڅMW}oW*ՀR1T9 x fl2ا_8z=rfC}A(rNq'|eug$?|/6n9pnD(N/!&g1Y&j}wrׯ?UшA_o?}JaɩpK>~~{yٖUu?'[8ǙU\f%*Fv]CO]e^B˩u {SWثJa/?9FX&}G5NXCcdWO\5z@e\-ervL6u9_s6($ Hfv+#iԋLGL˳g<*xc\ː~t[{ޙ ƳM LpЕH9+~xd*Fr3?+lSKc(/Ϳ/~x}uNjRZO e>.^[Gc,'͓G#12g!&},M3(k@B @F`uGgLo'J ?Ox./c?隤y<jbwߪ&(>H]-erJNG?<0&jH!$"H>\i|kJ#x8a|\FF xDzxR& {pi54G`j"' Jk'>'C%0'Y9=*l5W1$OZLJoS.Phd$^PK[X4d-mDΑjГMjћ6:Rz.9#dȥ9!G"4^>W9mc9Wsɶ$K>K C뭝<9*y\1obpHKZ=ZSPۉcC!̓% `C_mZ>{ujC[J!hQY%Sa,:eQ[\q T$0gƮ ``3%%$ksxwp[|= fA!W/g:֢8?O1(gg`Jj<3bœ%@A\2EfLؐ+M$b\fYh*iGԚlTCq;mƪS.z3Jyg4N柢RVxXZBN[z=։YO[x0R`/ddz;oxD.9zĬ7?b3|f̰{Ad5TɭA@-@[wrduڙVn3.J|`_hYr[:2AZwj4K9L1d~(BbY:$ozzdzK [SI峏&]بpͥМ83ߨR*'"g@`#K ] @bjHUy v0-w] @5zINJsl+_rVAWM$&L7`UXMXƻ2i2?v)+Xc Ew>clL(@(,E1F&&R$&4,1ŝM"G~5 };~6a_y.pcکIK7=xVRpsZtg妠a{&Ck:bS\` I@*d[nmlQ;rӻZV -6 9nikzvA^ܲI;E?e*=YJUI|4>L/gi|E[uĘg8\7L, R .||+#qQgrz[חʣHv2 %eG4O9ѥT~'] :뛣 5f%*HQ!ͨ 
GќkEo7Q^x. "v{>f=}-"gmz4@@Ŧ!y $#?aAhӜ(IZehE 4q3uا5FbWչ5;uյ[_`YDs֛ilr{:"mxUrA%  `PUK}vxG@(6)fř 1' kzg\kAvR0ȓE~yCXf,׽a)`wu_|O/v~þuJ2QIݕ :b5KaJ1Ѿ1/H[wjH[g| $[0i4R֞_/clY*; ZLkE LD~L(F=-p}h^hw?PxeG!N?g%M191cIXd(iZ';0琖t>ng85Ayq1Ә'Hpx:Lb %%'vĉRSSkA xu]q8C 'irPzÙYmIt*ZNy]rbgoוvBzu0 s4y` dGa2=Ý@ I;J_\b&?COU)/weL}r3`ZT,|kX7; pdfQWƩ<$6/א]Cv 5d!i $TQ@ڦI \qRDḅx>߁ow $ͷ^nGw3RjễzTΏ`Y&OY=) BZGrO7cҋA$3r.+†%/8݈DȚ b{HGiX 68iR._1lԐކKNI?.rV basA;@GzhxqZUMx6zQL;3ӌ%߷Mm5:&VQ&JpJ5-_ۄѵرȻjzr&>ݍPcT&_ߋ8N:fjhc.~80nڿSʛv{i~Æ0@pv(u C /IGH8Ō(3Fy"5/AU VQJHEbD=KB `WQ,;T͢6jōH\Y||L0V EBpss$NsJ&S9TZ0'0@\BfHa !(8{:"%laLeKJ@v ({\2(5@r<0DYXoF%@SQFR7%7EZD}4ZBl8"9B!bivzq۬~S'=HXK9l̎z5A:B=(i/aͪcȻ繞]x"ƃlX~bY^mc2A~|?濢'gdb*qw,M>2&$= цcp~oK cnltZ{; Nb9z Εv2( \[mt֚3 S;5p5u<],}kvݩv!!_"֤0)ZX͔.Wehb_n2fV{t'_șA)(=@@sj!ݵGWgp)Q+/j)9 Gqyo?3L ,dTս8_ˆEp3_\s|B/=x|uys *aSYedM5t8:?Q5kBj~TnG_#*o= TK̞Ձq lVudS&HmXe֒õ,rJLRSZP*>8W#MRFX݁(%Yvxad%Qڶ׈XMLtq.$k7Ri;]Jbu7:Gsg= +bB B`B*6sZSB)>:\ΪxlDcn$)G&|3bwhJҎ6.(8e.)ԗur[Ĕ]9pBj0nh5%c&v,آAU(9Va|3UEU"܎F8ץ|*\= TDU*Fjey5t7#!To6n3f 6o/&_j)۟V𔞅\Z-ye:Ϟ6xG2a9UI ~ 6pdK:w /'pC^wg39$ #[=tOk޼K|d ^Z%9ek~5/I$ uY 'Yޙ5jXiGl h IcF;zOe2^0\p@"hMۋE9T {39Շ6~=iqkZ\ĚE Ia_.bĈJ؄``Yx/lɈJ[d bsY'\UAZ܁b)!c Wäs0k |w04i/qAA\|O'JPߙ骋j? 5?⸈G"q\ԏ8 Ves#9D@ ApYr^Q ?YurY|c3CsFb lP,C`c񌫉Oe%Qތ. qxauaw7Bp$6N~o/:\RhX(~_^gh SE;k8xp(|4Y_A>ICDQ\$$-r02ĶfPRXiBR{Rfq3RIQ(C ray]}h&L ’.W+L<{F,TR~ҮktH¸ j;䤶ӽz2JjK(U\m768BpaiHyG4=c" NB{ >DX=0NFܼyqC/3"L@37|8;iUZfI| ;,LƜ0fZYݠ&oOdcW;zw 9ʔd F̍p8{ !B[^NCeMNgH˗#Z/+GN}=ŷrH+)ƵW!֫5MNE<{v9?, I)[zf@*ީg.grzv O?@ԆmtS܎&f鄂!k>o>NV<0HāAki4Z#{; ~e. c2b;lv-Y,c,$/u}$GF>ɴT֭#g.WhofЬ lU?ՏYzH2~tv_q2=#p"1 .WE0eΔe#)Q]M٧+şVG*#_}9 /ՙ7b.+jGwywu S&ߔ@mY2 eX9MV֋YB&;oBĄQGˢ a!ߛ_W\3 /zsa}hO1Eo`Z?WXj/erJL‰j Ǎ /ZЁQ?\xt7{@9^(y'^#a4X˧_ ؆s֪Z_>h2Tqb4\W#E%첈+ aL>vGڏK5f<On)lJCTt' Z>IYPvstᶔ& FOe~h4zl!7vt!|Ӆf3J҅Q𡾕ǭ&'p/160o_ƥDKq5p~.L4g $c$9^b ub{k6«8TZ/Fgsp)+QG#)H#5Kg}2[Ic` DUeV*@2h@\hef}6 Lr.48kFMڻv=*'U*sCEmyڕn>VA_¯oR~՘7O_o5{CJyk*˔Fߋ=U큵r㻪h. 
XPPX.~ۻW~/V.?\?mJ?u߾|+l[5s-z?’[CŬ~_\# ƥz8[;:M~Œ89lvU PEo}#a|F\E>Cne[j<2p))qwqlƲoS,8r7Fp8oZ &5)j{TQ}qHkv3KmXV*-b[P{etGx 6ר{%αvQ]jsOpyE\|^8y~A!P -"('E Ɍse|RKϫ9F!e!JS9 47 <ݕ.ќ^ ?Hx&3M,%!ZZnh sԌMa&V3 -<{yve:կ糿 Tro˃bLt 2X8hT qԽ l]}.^.d:9vqE49U6$H- :F٧*O YNl! 47Mq&`d`cBvܳNa3MڮvH.FKKj 懈Q"M zY¨Zdd bqƄ5m,Tkav K`2%:7ǖ$xjuORk9p9KF t'>NP3Gx?,!Wll[jZ3GQ#h`-mYVtjGHh;K󄚐s:b)#A9֋(g6WQ6WQnRKQ]p g16V׉H fb{x(Gi|5cmˌ/3m%l sencnǂPͳ{5[;;C(톟zxh9ϳtXX \*#;_yt*-lE!nm_Qmh- N֠aT[h]J#nu\~UrUo7_oo>]5̓q˯TW)|}u_o^}QpzW}*o|K/)izƴ9[y^( 쵴 57.z&QZ뀥, &DmYީu'w$%R!y$F=IrgQۀ.$Da!:CvCpr`F$3 G!L<fc䁦*eVVYp='XR"Fx4H&}"BWyq'ŤzSNmTa:?%4cmwij}ce_$|XKݧxݫ^ɳv]GV!o_]K.+ vͭC>^*4wCӗkřTh.jatuD&7i0n$L{*MP(̣&N5zkygUU pUW/D=ulߺKӬVwo0|dfR,I^6:Ƞo-̗VW_p-,~ߋZբ?fI}- 3de*2dRQ4x2qMqxϭ ]( >0 MxO`u}.F1xhҜMG/u}"Z V H_ƻ3nD /[Lv&{r,V{56hH蹥Sz`.-YȊ\2h"rrS0^Sr"^R3OŐfΤ(WqsR`xR2YøL2c*/P:ɦYjX;UR0b&z6\Hcq”T}L Omw#`y-!xB82EEn^ú&xLaOWYiɁaƄSശU "A`*Wwm(M)XVۤW7y{JWW"CYkBJ\dw0ƱPhBm =U<ٌk}dStihnMAӌ%R o2<~:@ - y$t GPCR(2cu10{rG@Reʘ5Lì7NGCƂ6I6+Mo`FaR{~ yj>jK:.8P &2f% WdܜZf7-z?@P[*{|GZO zE } 6|Hn}2lBܔE}H3s3‚pUC9v;f N U6= D=|RUǝf3̜zݩE+ N4lռJ=/ G8(JiC .5\hgP8˦|ptsTplŵi"D(LhB!em;9hOA+3z9 ,7hYAJ2)nFwL:f+J\ d.J;D[W@}F[7<L2gک\k7jrc]Y- :D]^YȄ :/p&js c 6=g1wTn*!sK+e/w 0*L?ov.rBEfXiU pJL[Bj2jӓU€ n=uޕQX>az]#m8_`}:ZgW}U DE|[yyjGl-Tmolb>\~ԝ>Yo~skcrd[>h0-7!ZPᒾSJz>:㌶z*pU( ,,x/]oi/ GioщꅏIgPpS%dj!]]]6B =Kx˹{)^ W(VG=(e;vsΗr+9Aխ΁Fi4ZĂ{G3! 
˼yzyG5O]ch>sJ*Љni "jd=;{(g%rR` ]V5.zHSAep [DI)e)Sx(EY՛&f[|.}:Q|3.7E䉢_mK䡮|(Q,bXogT[4\Dr/I+ j!TV$9'`!r.*w .oT=_POMJݧ~=WM5pT6Z #jWO~Мl^n[`/_ zNHhKza9C|׻VVNG5]l{"9ǣG&ddu)2&Jg`ZVGBJ*tH 'иc,g4ms#6 ַuK1X @Ũ5*Av2)7iޕ5Gn#RYǘmGGa88&Hl-R Mu.H []BDf"_wbCDy+Q KJE-+Q:# yq*FLԹ{R3Ht$_Ëz; ՄAYBRe(kQ aMtdSTUB/ؚ֕9Gj4V*W՝)藂ZK*`7gXE\_5+pYZ0iQG^a!ނ_ܺ~}kny馈Dhx7E`ԓ9n H i%|êOŒk*\m>㫬@%rAL!$ Xlm~`ۘ6&/$<|j}qz^t x![B)EI*x[Ch5*+%hiIEU`tUՊl@zydQHDp|1^b+e\ܔ:m g(a"eT+2Gwx`vmo7ox,V ̺ v(.BH)ɏR{dUS8G GD/1Ƃ>A#K4ta VEn_X^0]`8FL^F.2\˝aJGѬuuߨB@ga?qIr,."> Xe>``Qu&%qRO4ز'i),WyS* 2m#jg1 ~F NWH8l'OY}4Vܯ^]97X䉣^jR .,CX%Վh[.lAX,XiUH!+!tE,X ,ͦ'Xajb|K [/]$)C]BWuPBE5X%,jV^ K lIPtKjWihCfM+&&bM%Xy;tU -aZ5A\(BL^K@6PLZPԪdcJTI\YjUV?+kzvLsP#T]J3⺪ hǚ5`t}R &KЈWpe*N**JVdHͰ o"`ɅFi=)^^iNԫd6hKB\ʲ( SJ2T"[QQ+",͡>ejN F 2Ih@@hfUx(^I#Vzx :ww&Ccu4<36xZ=61croqd dFS"4%${WfحH/z=GLD/=hRn3NG eHbK88 hBS5!a2UOcY~rL\n(]rUӳ%շ ;K/tuܳOn̨ɞC,S`$t ֌- }閐{bKJ)8΢yJJ+Ws/`WݪkLQS+? iЇ1/ߵə|hnw( n~|,7#_\R}X7wl@ݎ}8C!&zo4f$?LUE+~Ɯ*tл 3(<ɜh8 $=v 8a=zN~N_Wt*.c[bN~?[A9XW6/[!=2k@NֲU<NJ B~A2ᡐk ¸*]U?O3۟LPQQ|H ;瞞3?Kɢ!Yx'eThC) {7%瞞3Z EcKM) aɾg͒o#x7/}|˒5O{ ˲8Gu/m^KV {W ;Fx{7߻r$ Ξ){rfS!妦Wq]9_Yry[,yw>ěݗoy#C#ej|LGW a26MvqG'gx'wL˓mt"U%2m587Ri&bHd,ÛO%2?gq􁋭=?hcMl{%!CaͰ/fhI4Lf5(dT/qdJýRgȽ|14\.+TUV7O 9JMu&!/=eȬ U3/kYn!"s覜P6Vx\.MIM\sk*~[o^4U"@(D/t`֊pB\ a/nt&狤U"ÐJo9 '}xu7y`~ 6y`^}Si̐'84aR-ov}^S}>})McJy41Fo(ׇpCQ1EY_] '3Hu+`F H&>g*B#w̑yIɣ_D&,ۧvhr=&$obD _!Z!5 1ZL"rĠY 3(,\_Z7},* zxtSE@|Wg ΰ&'ėcsDGLtr t3Z{]>O]^/8q>\6||:D3&-90w8״Ck2˥ԥ35k>~>}w҂QCLs L53^ԥn:QfHaK*@t2AzԬʢ$eŢp͠s~,g-IwjdobR*u!s^z& DžGƄ^747JS)E~iGw /*%/:AHns`̘HS`Uվd,$K}#I\J3; Uଡ5(N;yAb$Ghe~uH3UQfe[s)i) 10Y>Ӏ;/#gN*וy? cK/U3!rG2aY=NWze yTx>gpՃ"T/qǷv`2A@ᬙ1axt9Q b /L@h`1ԋRi>22!:&Bl/S_D%‹ayF7SG1Ud+/>1!$*==6nSp$$#1GDŽ'/M50E #IFQQ@zDLn_/]^Rk%roZ/'[n#9WxY;<-֙UHر=1PIi$O~fA6(ʫ22nVuYýq!U˕&@]H8GvcxnWKr [lIV.gѿ"4S 犳"[lhÍF[<jz'?OkP% ޭ|_ ^f4xC\tϣbwݰ:SݏD`+gGwuᄊKj2rຍCc{o3rcqFQzI}FqZ|V UjD:|oZ[y, },/lA$%k(N xKj?xȲ4<6o離[E^@k+GxK J\s[#kT KwkFVlJ3mG!Qt*21 ?w1MEw'}FKͿUfh)3L Q8t;Do=h)c_6NOpϝL! 
.i>I'ng!0W_TVv:k kekbxlm-lUo pLoݑzףr>Z }Y]jQ>c磖prSy^5 .V'-\G^9$UbAs%! CtsƮ>5]Dy{qBǩSX [8pR2{1% 8N}`>}7fhG(7(m\܏WŦȼp0>䇁 ՟x=dqwvFNMx4_d-Jk[4}gS6ǏG=,,yr_fzr͉NI(QKlXsDp0AHM0K:})\7>M ܦeqy"URPF!n)Y:$sha'(X8KچM. [蜌4\q04,U.ɋ_*eY1Icl cyuHж~7^ÞWS=VU#盤d@q$AmXJQJ6L gv4_>n/gbz}u&LX!N|Si F8n߸ru7NVr7v1&M3hqK8QCڨEy)ɤiAׅ%6ZxpS(jS=[ճ@VUE2szVZ=8OR;S>pJ>0noG[)1w[cQDAX̊;Wh$H&F$ E!P+>ƀT=ީdq$Xd I3H,~/#WhzF l 5PCEaȩPj (9͂E$@{",u=bEED|5I~5(aPKgrnOC7 ̎u>D'b6Ys|^ dC~ (Ϳ09d\rB >>nQ#WȻ0/!ܒ,:q>?,"E H!) &ܻq a<8&w 7Y}tR8Ԃr ex;amvnп/"-ڍSCv&pSQ$1AJgfe}`.w, )JbZ9n@4{LRL 3u>ÎW[6h p4Y B,r:$S`OW% D@֌|>og7{5?U=^ /*.qW%TM{w:8cڣk&~# ?sD9H ԗ\lF׸>>ԹV&jy0}at_ ըW=o9lw+ly:ŷ/c+y0EFq2p]۪!+vٴe}˘YϿ|}ï}ҚdXqnЇ,Z[O@MvSOR%E}U>&k/4ґ2Ѡ7&U1Ew<G KN)d@k_0er-~wsQxxY3yj~zy1ӠxZRF1 If ,, "z<&{F 4תF I)~"߄|U8!_tRO'ˀl:·HVB Q$F&+-慓  r~qIu[&}/lOBJnϬ;5U Q.OG0! OS RmD)q,$ 9bx aAIS6'6nsMNXMFM|zLǢ*qPx*מpkF:+`t!¤y@)&ogetu GFi6G{wbإkǥqUMQD9$ghKFG4!!1F16D`S\& z:uDamSTJQ.őwKQQ*E6I)*q]6ixHzh%yE\|:T/z܏hxb=nJޟ%PsԼ8tWcZ^tC9Yz.݂ lX /^$Ywzqs^~A|fo#bY~r嶬+K}()~/4h aԮm]gD β GyP{3"GޕFr$B̋m`K+0Oxa`}w!9EHBY$U};$FXagdVQQq-p;u8xR/Y=ٓ׷? Ǩ|(LY,{@Mf7i9t~g@$UJ4b\ITK/x"Q`(DIY,l. +TpNwmDX#k`BilǴv-pi6Px "M9Q[7A+ LM# 0T?P rg,xpQ;p%v<(!_f/M!úa:yXf.`eԺe2n5+W, j{+.=Zh^@rXb/;C+lFSMq U4C$R> ӳRDӇR~s><'}(‹+/};im013 6~oΥ5$Ay SѬ؇X7^{M*[eߌ'>#^w "m`^D]=;7n)/ݯDe瞱"6EKy̓Q<fUۚPHP.MB֪PR|ۏvA' N`R:hVR݄3*pQC۴.0#Ϯ ?ޑ5gKqhk~>y J$ِ*!iqڴM=p(UǞu>X۝֛);i\na OEfe4h1B/RY4˝ӶrgnKlW7q2.p4y.sOBj//dS25fExkWl,{bRd0BuY圮iӇWNTtm+O;iesӶo6PaFaxũeߚUp^UxxAxUB2?"r.iJ̬Ny4I/ڹ(4 OTSU\ƥQRBQ3m !'4-S JAP AȨQA ۂj M~{ow7^u쏟֟?m<^ӟ^v_Fc#1\ `2h`':)Hw_fe}=vlN9ikB+G 78``FAcC`>a Tiq#<Ԡ %2^g>eS4 w'Oء !YyoYf >:G v(<0RyA(`ŃV&T<'|%0 Cr0,/i;aSv(<ǪN!5fPR ZjC9'oit<!\יO١B Ł 1Z;g(gy ;`>[b hdcIC|%Wxi-斪hHTD ƒo>|EW L۠e91ѭNxx);`^FZI);3Iϲ9^Zd (Km4SLnwmM7th=DxUx_l<=㹖:eI-uZezn;7è/A$cdn{M؞‘!UliwviQohGsup>ܻga`q[mr߀Ds,-.%i}7.h'T&*U}z0ho]sڎH1Jr%l6_! 
D/URhiyU`=I03+#K EȱO@!V cJ ܲR+#l]-2P~%U1 խCU x VE˵VńP=MWw$n翠Wv3\?Ӈ?t.\Xݮ?C!al``AڠϨB- $Tk0&)V{G8pkŒ #Q덋_WEdޣ,3807_a>D<,FᗻO~=%!@~uo&牆/_wߏoq-= 19 dLd`xSYy`@R)4D)F1x5,02ڠ5fBDXh'B"E9#pW$}JeM >e%-n:kk8F,iv6r9˜7J=K8k*T;9֛ HӾ睝#-/p.댴\"TL9"?&P䰵HdPiMGN q[T+ W9nPz*"dn;:FhKD c7]^n6*\"0mW޻8LhyɱKv\]\ɸD]$.xzi{̥s?r>q3%U9 Z_O>89s8c;KRc8+2p 39iysqGZ_*~ZGHgzfwvzsտ=!vw7aLsf_Ms Y=|r|dqsw7?Fs8)~p۞M7&Wk;nߟz5ܽ6+?|=[U?(WFCrҩI~}F҃u Eurĺ].3tYۺe3jJhWY:%y֍ƫ`$ SX ,\pUTĀ ȓwC5or  Q Hgw3{.%UFOrW*zUg~d0 ɱ.(qS #-RRk)Ea`16r%2^bxLjEFe4s!M `􂃡N7mjY Q` R(+"0f8ٱ& i%"Բ[^XU ,L)WMxN.~U=($=;]Ao5-bzMN.\?+$ և$s$l~9{9tev5ǜj ǃLwFN2*VR  XHP׎oS"%B_A 9 ji n h h9쀵tP6}u^ϴ凅"}e>ܽgԛ/ >7$ݲk@%t4$y-GUto% gt 9 L6Ѥ|!J!%-/Xzx( (ɥ:xR%e v? P}%L%[$"J:=K/rZK ib_eܦ;aF JpubyƳ_/L[Qd:$|% gJx>!Ҁs.HTԿnULDfaتH0Ⱥ BP%!|IS[2!ˆ B,ܖ\ډ}I@.o ItGkSIDA q Vŝ1jL1@irif'Mӊ` !T 4Ti(KkVir 6xK4#owXO˕o9Q$"hPɃPƓ@lAZ9)ˤXי'c-h Qb(w~^}3mesIFǟPUG$66;GB`dDȷvyQTCq{( %>K C.hLU~J/hNM SZ F818i#"FVV`!Zo\\dKP '\bvBTTb.T9Lg|Z"Jo3^F0$(zA| k,(>Gb*9$ª TP:ۨ`>Pz|<~[=-*yS=i HdCV0HL(c-g/z}}2OyN2Q} 3+?0d1&aFo4G|%f>-);`R, 3V/);`a^q2C TטO(-B)̏DkȎx Rx:X0g@{ GMHm\q'(hヲXĭH0{D"FLmpEkB`H?>nkA9i@ #Wv/֩PN6~&%}1C:T7NE32֐2߲I 1p˴`ĢC%<JK4 ~׻Zݎ3EKؒ/1Vq1XnjL08 npvQ˩"x3g4(=ѣ7>*J0Z1Foؒ N?'At(dqRH6c[Φ]`͒ɗK#9<9-8eIZpŐzEџʎezp= tf{J])][6'nM->{HsD*”D-.w_⋾l3O%j|QHSpɒ㷫tb1OyISdI_&QNF ׋Y),j.W/@Xv̊;BCkJ,I'NۥمN$d_)PRljPplՓ%GpN?`! \V qI!bLkm$7Em/Sv,rHN6W[XLOQ7-f7[=fֲWŏEX%U練݌%TơCrQPܽ ˯6c+WL"p?}]rn&Z݊o? dDT&xG=m; ׼ǤfWŜ{Nj6E)\{3e[Lr*1CQn{QCK;wK^Qcu;m_9jL5aqI0#DZ>`IFzMj欸Mj朇6I°~~KU$Ǧ-*1Dedv_bUhQ9X1sAn\e>L3Fa]<9vDZ cBƤ}A=<J1"AP㖛M~ N6ԖԌK[]TPwD5P!Bz w9=^A q '"tH;1Ҫ`WL~Fz޷^MIoX0P++R2>nJo#bNfS{ LwH db ĢOFB &:TwW`O[ H{ !Hs(aQ%Y,)byr 94SjzNֻY:POZVji׽pgx`Zo#v 0KiejbN+1̚"l,i 0:'N+v2+]e.P효WZ!zߜzqB鬉V4U'N}N3)Nqtf6>ҤXE'hEJzZu2yX w#`v:OY<17|9Ӕ9ZE_n;|y×9 z'A-^bu%)_ˀq=/U۷n-gU=r;Pv0M:,ؤSyP:M/P4j HULm} =2_v7nb %+>7myh]@byaW"ZD|a_Xc~]VF8?58?2?2?2EE?? 
EOm7Zm}+z{n,G99[꜂-۾vՄiIz]'i:/FGuhT%<C* {u}oTsPM 6u^6)`\w~y‮t'GLe 2>|.ײCGlab(3`)ASAb$HwR.R#aќ56V+(SKfZRS ]\ s(V-J&[&F4.8N* a"h03G`DF-L1C@j!I$/ϥVԭ_üۇo}cϳןW̕/Jx^ga >|qjŻN }hT>f'ԼL)M7WaQS|e~X;7"X~&e7ح)gonG@٭օs-)&Fq>h`v+ tJhn\EhD݊;n]X;76j+>7ߦqQHU)G@fF$Gʼn~/a"/*,X2ڛ .^@wB k9Uʼn&x j7kc.F kgdgcT`E,l8k5=5_zHXiaCP-p\z0zg:X'm`,[})6gcs؜f'ͺNf84շ#'%`Å_(d\(F|d/|^=1>潏V}"z:U'Z^fyF:*tkWpk]xiVOVs8t'ztFՕNft3sڎYMD}Loa ?L_@P+oXEoz<7:;3.ί'WX~ )!*Kc:ya/aʾ^,>\\@?(u07tfpwO,O:n?<.Gp5@roz,M9P?|x TJդڊ+M(BhNo^ 0Y@3gCDy;f%=ܮztI/8cmk$@WKzz L 46IG S{ݯB:+4ٱ\N; b'K >(6U[@.(/³#eѲûy+Kkb%% ~*X1ɭGcqܥd(bTΚRN`i<R#PGi0Lc]K- >8 x;n 5! Im Dkq`3ZøaqԀlV8jouK`ѱ@j*n5F"p/rIJRRQd0Wqf. V@gu\CI g"\[f"@V: 8T e)%w ESRK$,P:l fBxmd0}V#Y[y1FNZ8C`Z2gݘER rL&vh+)Tօs-))GnZT-6*-*3t}f*t`!D l da7jQzGdR rL&vS:ʴօs-)F#`d٭)m G)'[qGs٭ y&ZdSmk!xshH=1#B 1:n/lQDn% w4;" y&ZdSmLantgdNahFN)h6u`!DlJDe7AGq)m vy)烱[qGs٭ y&ZdS>cQG[Aq{a"BtJcfGt`!Dlg6e7F#)m 1[ TR\vB޹ٔnySƓ 븐^G&vWuQץ]=g;j UP(4LUU*w"Tw>ky"kݒl2L7y basCFwNkYȦej25vYo~9BTaȷgᙽ|Hu<_OeH]!dB :ve(EW& {]gqs) -‚@{F()Nr)NQ"Zy^jn`º' h < G5,%qEKEE.* )* ][o[G+&9r/XAىe20*q# I ה%TB[<'C(갪uuHܴ6ZpãTQB9W)Q4k|g"uQAJ9ȹGTe$'L.H:3g6kEy TPm%5{20,ہ+-&hgQKJ".E 6$riBqX AtjԞ[XD32(2jKwJն5:䄊L$H5Z'߉"{NIhX*j#?}iK4a/Q2eQ Ǭg"8.' 5,"(* 9a2DJ/ӠXKa!F9)%bwLQ0dA1UQX.( $Œ77 2C6XuR^xBu̐Iduag3áɦ <Ɖv.LHQ"áKøziAW!ZXTrݳw5Pأ1rA(Ccx/£ZW,<ô(rO?7lp}Yurzo5lp3&U墭}ȳ6(bR 6e@SceRX)T)ힲF \$9re"pT@&9k ? 
"3KݿNO WL Zpyu[^W=ׅ9YeߔiK6 {".yt>&ߕF+0n\pхt~Nʽ*猪@"]8y훴rz:HX8LΛ7,PːNg;aXꐩfM2ngUuZxI0zA<>|pfw!O" dQ&3jϺ2VH2 5Czx(UΝ tD%&op CѾkBYaNZXr8""o0rg/\WBp7{[Cgln-U@ lw` s;!AjterDmwyi[SLo?='[jkҷqjC_[mcCqWqZ!l`dXzw _ mH3&{s͚"{fNxL$D/nR}*RjSҿ %%Y)ySW65;F Eڔ@o@N;@T7\yRi#2\:RNFo9a 7ӟ>2.۴BX) Ȇd4Wld]$r0eJV l\UCl~H[U[JFuUöykooVWӪ`f{r@"ŨcuxT8$[0lDGZ0a8aign:c<$[!1Y-jHsDك}\?\7T!?G(V<>BY9-MJi٠.+[u@T7|-09,.MXg#*Wfa𕹹eJUOWӴD!kY0 't* K&S>h%VHDf4w:u l$jhT,C67*p;<,e".,*TdMPDժyd`[n@@Ѷʵ4O睟WK4upTv-<`Yj06߷ 3 guU@}c Dp bOf+Nyu'.aZs?$Ec@Us2h \J5/ʌ8 |&U3`7M}xXv`tj{maCf\^cq5+z17"jke.Y`g#be1+E[awjD xt6rS-aB[3OK6@4mDF8iNxC3`ep6Ԁ:80.Ǡ15|ڢO\/NB3a$6j`ŪqYִf>e zQY}4?WPWƁ #Fjꝍ&&)kN͠D&F_6 Lq3Ӣ(zoak#lg c`HtAʦ)W59%gj*6߅nTC`3i3 n\cfd$EIk~ 7 [T:?6,,I/ P0 ?^o)j^\&n2K' m*O?+ͅ[Vg'/u:g+X:ʟU fǛYv>}}&"Qh]cZ7dh/L]_\g7K >WR‚<{ZLKb~?zZ7e}~~unp$o^܎h򧷳OW(sUZO꺋ųLD̑*Cfp!'6ZO~8zHr|.Px`}*zruBZ.;_r1/-`z3~`דuw@:=wADŽ kT 5[Af}gMѐ'BD/j7d/Zb@rb'7irBBգz3l۠EwTJ 0JcB; lBa5ܽxGZa-t^ ;ya߾W+3F-QB#q'dy=#E`-V s-R;P+IƽZʓY",jF< )jEi2;Gh0L]!E"dnW\];wH=24ǛMP~q||$^{˛o} v,FJ_W'?ku9\?e9_}?H #LuvIz#XrW鶿)_0I컿\3Erd&nV92cJ'.dQ{5Y+3myix9f]Ɏ&*}CoŻ}Nw`,ad>}e+E,%`/oWGciGqS~5;K|uv}hRXX:0Pe''I@Jej6]]w7|~T~?@I,0{y.#m=̭TZz%OUAɝ0u6%'T붕W˳jd 5>櫘GԪv}Hb ;`i H6cx7^pN%ť<G%S55?(LWTQ.vAݎuUR Pbuo@Uʴe]|%4aD$  Bb2>NDɴfLp.'MO0d$_Ъjd(MpcG+)a>+IS6_|:B/慹qk7TJ5};Q{nzodD;1\ZG(fzWb=xd~=ld Viqo.S}쎎F7wmuzJ1bYe@]l0d`3[M)s-؀ubǺ.|+>d |}'Hn~YO]Do]v#G8߆zP]G:ÚX-bFp>?y/}+I^@ ޜYj19n9s!+s#I, ~v'~,nd4en ax?mjQ}l5X;ƛ 滧hGsv<]?^jq|qx' {ǫ ǫ\?6ߛI#`WtjozEݙ\n%Vނž*kI }EN'W{Դp}61_](bےG@ ?oLj=MoLl:8}W_HEumC^Tf=?Z~]ot?Ź ozP{s%#СEoP=P,Cی& e;Gls xWa{h|О_8VmMzUn$yݝͦB`͓R/\BBJw|b|OxҲ%GZz.+ulbNRb\ܞoZys,Sa=D`PxRѳ$ h5ոK]PWʂ,FP1BBt6Vm0jr#TETHXv5AN vKjPOwllەmjR! D`.\DoޡVsN6:Ȱ ڦ3]H++f% SIM&Ywђˎ"${;cr![ ;[0`PT# } ! 
&.vo\Su$/Ҩr JbPe2G  0hKƄXZ y}u=/ƒ>$nuwfӿ& _f0oKX>~cK^@p7CsGfӮ% i쟡g¯u=Se J{K~24CsD(2fhNgGչ6-q"sF{0: )= ͹4cg5 N x<8^B%ݏ-֞OI_Ploun:wB6 3Ǵ,i6`4ң~z ފU($ _@ \Է?A|_WuS'^]7sOyuooEk3~-i/ɇW^\M˗?ʉ|.h1*~b8~/~~./h^|/?1y<7冽 4ff{^~b= ^=&.rB ^ C} X,VNzo&㻫>D!ΎmO z LZEl^KNg4 }J` pF+謗BBF2tYYɫ9BؙGpO1hE3ĦdD5ẅ́Xg TiF&qyq{ ypY#RGKPm?Knr}Ȕ>P%@7E-rU(gjYk&A@Nd$T$DٕQb"xlŸ9Zkjv7 Z!*{Ӻ? YsV1IUAeT얊 3I[{\5YVL)V<%B-PqnHx}uUl98Hhj*pZDdy6pdzjg̤+(FHk0 "fmrlT\@v$5tEͧɾuM:+hEDj/QroJ0Ј=m`EΣZAѢy}@ NԱe QX|CЪg&5:٠nd9fcߠA_˟~s@9ʏb .?pξ /zpoN yCރ~+ewzR_F=2ҾU/J^lpp찶JtY"JQ5뀡b*IT\.j 7RM<[~6vjT󬰟ֶڮ41Jlj?ֱ(}(5܇R"oqpJ;.}(%݇R+qSgtx5jcM @&ÊrL1HP#AT:9l)$=jv,Ä!qj!U$T@| p5Xm+Hբ[@GG@ !FPBeV5q+4KV5SmВ*U[x (&ke/ŽUaf*(JlT[LRˆbŸdwJ6kc5)UtWF" qsGsޱ3%BTlX"6r (R%^B j|"ϧŵD2-Y#X{ۨ8ވ.b(Q6Dُޗ۩u֞k^$EVTmRgd=Pj6oQ}I#(}(+ Ca0ᡣ@i_5>GOPj:< C@#z(s+:`&Jf U(I>),ZNb%*Χ%,DW=gIZj*K_]A@d0X_!iP팚gMB$dh!t%*dH7T "zjIbBck{>Y|z @`m@&gi~ṖGǨ>Cg>mjӇR=GRmP:P-mׇRgQ}I_ҧҮoxn嬆63;6[-h'6Q{He >| UFQ"sj1̘O[:D ,ILj&BN*,ێdG,Wn0ڧQ*53p~䈹u MeJ\uʂ!*PM sOh0(Xr[{1MNɚʤd 5cY PDxpՊ!5f`-iZK.E\BI&ZM.ڳ9Zg3$x b&,hHlBaάe[r&hMC;]4"}/w<<`/wj{ʖoJa̶[F)@JT3g>i,á?]w>g6J5T@>T-F)uuWa/93Q=7RGǩ>۠ڢu=)0X|@)pJgzܸ_ie7m ,"8Ilv%MmiVE)vH3lu2ceMFHV}bY:K铖R⤔'QcJ)EqRZZu֥O[JRVz\-K)#qRJ鳔>i)v)e|iH)KYk])vҧ'E٥]ХBó7k;ӖҨLBEC$Ƭ%Rg,!"rL[S=S:7@_*df,7#g4w!Ux`Y_/"ʺ $Z0)S"hQ:tr&m yR@G^DU*%\s7d4,u L,`Iq}&__{ ݷ[.8OW()́ Ra޳(|g|x$>B~|&=R,T1C}&)c!WʱSV!ص ' $h/I"CC. 
--J|YȻfRg6^LgqM{?_ $^ ZTPKyg %'I%j0[\e9x H{ G$>,r,q϶PT(L7$lDʣOF_PzkI $܇VVI+íV![qPO(''ͷj6om+["L~8REr%('x岭15\ʄXOdl.~% V}M~aqqd?%ÁpN];תȻ/>dS̎߃ZEPuaR1E-H.h]t$uL !VbmQNmL !Ν &9vB25b0iq#2*]No_n­ǹkUV21Nܴ&w*`ʛ^"]^ pP]4&/:8-DӋ/ 0ۻADUGgglMAQ:k!R#If;d${cOZt˽Bp֛Ŗx5Q9;oK`8M|kM7ǵCwޠ| ŝ'e_L -3fF mͭi<+"J((#f2v  ]a# /}^0Dڌ99~8̵})5_7&Мu{v{TioھS'#>j2s1MfM?J"z YgJSe+IA~/GB zԚ^& FH8e|dRVJ~N pw`Hb'̯y0T9eX|tr恠йq@l S|x1U$A~Z*MoŕlYOO*P7e-IW.T\'\ߛ5[SĈN1h#ZPnc#[h#ҧoqnQ5A4v;ܧvk@"R!!_Td\4]*)Vnʳ2M]2}(;&*f ԝ|\aF^v1"D_^!GLeZJH~}?ϧ{aC*;IF8%iB9O?Vqxd9,;=2."\4Mz`6@!y(SZmH4>$pɬ">:iEp{N'tωl2 UӑaԤBwc4*oP4w1B MENsI#B[ۡOtm>5+R0D꣢{G)ȔO,eDT;R 8Xlc3+11mLr 5H> ͔e,ߚ8 Spȍ m\89lTсd1F[I l #DۙK=x#ml<juDo06W@ 8ʉVπC1-!rtƝ〧"-+c|,J7c*%Hu+ʅk,Zcjdt Up%'G7 X ;W!|艥=$&OAzH}u&y͉ɱ+n]LuZ;,]N*}jO7F8oF^ޚ-Gn1|{_XQ<{?="{VOdˈ>,ƒ>{ _㱇hL+x8C}!DD؉XCV{^R`#:5،$,pРgVF.őRn,8!`.U$G6Kpc)*r >YKiUj%C,=@:(B VFu!d/,z3nUQ>YJKJJ 6RBjYz,čXrRNXZH)#Kq6ZF]^Dk?:&uZml-&*x`;QUQíJ1aOwF%)܇A*^I]`R}R:4VRhc`YJb;IH,1`V. @'9QmO R* <[}> +רDy~IM[" rUiT2p"zU|jCgfXX3_bt[W<^87jAY*M9G#6Y=oȅb\RqHн *FTdg0 4AINˈDQCqJ| [5esQ`!2ЇFq9!s;"Hqb I)KLCJK@J'YA(fΗn~Lo l;KQhE$~ADeSnI f(2 *8%6=`NGf(VY5elFėqQ P`ic20qg$g1iOL O "f:6m- ,e@"$1gdbP"e.n%-ƣ3_*`D"2{|ʦ> +>GLo/ll>'&ӏoƦv"mw8gۣG9aǣYt?nevg\$l4ޚjE95ܬaq}mozQF{8B*z\wnF 41Cm0DJFܘCj(f-djFyRX$6@JlxRE|c#hToP;}B)xj44EjSДHcй̣L$2\nM 4& X$OeΩyTEkoHaBjBlj3JPϨPQeծ(p R.F*I\gTC`Aͪ+N Bh\BJPhM2KX = xk5/$V CFɾQ g_+2FX@UWCƭ<>`3jl uc|. -K3mwpbw1 ا*n:`AfHKiUj͎)><ۇh/vox^z u1S\  X*jܮF Y{y> {{Ã3R%Eh ɏYtؖ3H_̳;?fR1O*[!븜ZW߯.z20[ry-[eS4['C%P}k kUsOwz1viv5 k)<1*V]V\U4{+ހ WWtWpPɂeo hdug^s!e{J}Թ&D mX?5rxv?EӚ;%Ў<:9mhzs"" :y2 ?Իf XjJ̠d9D&1"S7ӪZ.XEYa?A~ws5ZyMGi+uw,/:y3LOߓ,5Ńu` >Oۿ=Ȋc)Щm<|0ӛթX/(yjTX,D*{LԝCyD5,{CEKHnn2o 6~>Djpl{B aVp"R۬jUylzۋ\6\jS.N$| Im!ARj3uCCnP'>֭՗i徭[xLGC^9EsJx+Ԣ0ѣu :n="K]tf;Z6)ژS\jdQpnGϻ\Zc.uRԐr6TV6VvLx)n,eEP XʤK ҃fpDAT8ҥfqdAm21t,G[ZHLgCf)?%NWSsŌCH[iUjcrdAj7R\V֊Yz,eR.KwI=J8[8l R!'KwI=JL#Kn,,9 zZץRX*X1k2z!5Vy,=8#Ka5ab˥͗6UT>R4^EMJx1JoFP? t#ݗV8<)p#+{"d)r!' 
{T]72, .o@>}xm C{R=&H@m^ßW/3%d0!IN8'i6,I3J9J)瓬ǭХ"aL";}wiI-5Y(U!rޝB?+[N!*9) S]!>jY+ܓV3&ItB_gs"U L}϶z俷JNv4psj)HԏqCm0*m; WVª G!KBRκ~gSMy IFy„MzDyfy&T&sRES (R@@U˴]򼪯\N=+5ʿ4/*ĘM[f@Ei1ELR%6C&2A'$Ѹ!:bA9q2¬=<14JD,pIКC*bx$9)';ơL8 ! n1d\q(rhƛ̔JEKYhE(0#rn=.=DXDžsag(],P5_)@vi3<~>ew O4!Mˊ~1\Wf5 'SƯ5۱%w:>,2ju߱&h?}{tu5' TRG~>h'JlI Ķ8ݬaq}mڲiϪQ(siO0ǣߣyr޹xR!+FԳlA13Im!|kV) DŽ(S$ fTBQ+Ӷp:H%IN" +丅b$~*4jqnQjzso,28|%9!H2Or//h6m-YN!;8-1Эf m@m x@OF]YO >5IWdkzAn 0a$u{7P c'>lVEtH1ZϴŒ=r "qս 4*pGIw is;$]>ݹh є$ԒQ)Q*d;U00 Xq^`K W{ob} }y7ۼR~p+պG@yP7a K:r{Ia% [Km"Cb 450x2JƜU(.%BSŪiDwU=U`+h𧽊uQhB1pN;|[KSJXg-<ѺWN NI>u\bBwXvFuW_9Fuk!eNcRXV E ngͻ*P[T2`sD2%F>/qd/]^3ssW]GF[{,l2 | ciqnl73>/7oL&COBfi^F<?Q:K4Reޚ'7I|~9 d)IˤP=L7ŧ!h5F啰?d|if]f] ( Ox/A .ᵣ+ oy$/=A^E<WƒQ.@%)K^r)!%az`~Tx46?7ҕd}u~A?ToV;ٚSf3]eTj@D퍍`r0[..ॴB!AD-PD \\i/x4Fo j01H@Lg\#̘bျ\9/[Jv 3ZmLψlk>ZŪb+ԍX(V5RT @eTrx.#Ųy7kM=u~D},#6RhHvӴ4nOވ { ŘYF1boBIYLO邻9M`i#G@4w|B Q〃'}`"5k>UOSxCqy?Rvg49%+[^kRrRXYjhy+KTdHrՄH4 QO4OBtH.4>=idQ?+yORǩVe<%;'ЙH(H$l|H9hmvG)T` ]ؓthk:leI٦ϐZC጗lAЪ{KmQ,^XMLObxRƁeҖxz_jcZ3R2su F %sԶǦd@!<䍻hO1-~>nͰ薫FJt1"ߐ-2 !oEs|X|X\TIZdd&fomq;Ʃd2fIO4;TwaKz9l7\omE8Rjλo4hK9h}uaZQmOhTNPCnO5L63OKa02N8vʸUCzAmmZ32[G\c։=#LxKj=%6}rK>g~lO8m,Wn9(GT]zP6b,ۥ+ i&/)pfSm@hK<*0y.0a#.n8si_-Y"l\ K haY'L+SlQ.j]Ve kY8'@{$cɺCE W UBk@ؔ$5VQ k6Q{84JԾsxSL gj= gƫL_j?3~&d&& k}EZ:l'=vix?CkZ )dhgtUw![ 4ȵ[5u$IjKGmeh<;=-Я(� Z%_af*0:$GyA;O=aF' BG1;?Axn DY?bt&\}zrKv1&բcQ3>5v?l'F;FL9%`Hҕa1]9&T&QQk Bӫ0X@VѮZ"ru)п<|]y["n]V4=y рh6_h$47pA5D6>XIW7+UtYk N6żHmDnX3l]}o"G[aS%|xgToz'?=~Jx7>X!jKEBBuϏw}~|| HlkF[͖걼}bo||~K˛mVm9J֭V`qӼ/ Cη(ԩSꚺfYVh ^)W6+c]Aɶ ?tI??甗nMLUuvT[ Ob><[#¡te4)(:ociUoJӃ63SfgrYrY$ATEMd1F \R! m*n4%ɨw}j9zwzbd| @Ç0 p{l(f7,$Jtު;޵&C{_n﷓Ra}h&W>8XwJ^Mw[?_7ݽ{1DwO4aOsE <}Yv;rˋ1~o߿z=mX2P$HwJVY#KkNFIL * !4( GԺ֧BWEaj٨RBka0e#( +h=֕};yn=ᥧޗڰ/=k/U ^F,=%[wS\0K C,Vڜ^zJ}$x{)Dnx炽,!zOjpgRyk-ْd`Υ'?~tkxs{ⴰ’ju`%%9'h`Z7d3golF(@Õޯ%IU89˟Vyf \ J?~cyxv_G*N'Obd梗݆4WeLs0 [(tܑz&b$]I X6>`.#}mhR ĩXK3/)hP#s.q)&ZVi&2 r-j=$=^^@G'lb14ʸ2*YUa_\N(W\e》*\S6bWD^Y5ɭ(mai*I+eOV:۶s K!Jvtp|ɭdX{Gaq TksֽQx5֡@*QZ]"jH. 
P4Д^צB`5FnI0 ?T~qp7Hf%_Go4CW i @BfXJU5Yߐ ,ReJTl(^D>iyf xB7Kß #V(]^X#,5+x-md~4qm֏SuIBgK6z) aHkQ<繍SR_Ht)9s9W }jbd\ ` 8 F,Ofqju R7H "'.jq o;S@Ԏp`^K(9iC25V]1|c}iwudv"S Сz{|KFgs n9}M:G͖d8ήd|-ևS96:⎒c>DQ~)bs_ƦY ZI$ݼ}zr+v19͢F/ A{Mn}ꑑx˼I.oYSg*Ĉ)Γ患)O\jcji* Я`WmD2}$UŏvRl?73>{a5A^Kdh8*ޙyERc:Ȅ=@jۢ8q@-򪹿x"ֶ}: 5ߕ7Z۫?>悄~( 7Z>A mﯞS77>/n_q1QUףx6M-˼?"Ɛmꆀ@?ԏ-Sn77Y>՚-u>nM1 ur˾M &n&#]4˧Ξ&8VKn:qoDgަE6q3t^y/W{by/ol݀^āSR_Km/s?"n՛Ex8/Hm}/^VTpuUߊ_|J[O!: \=d=th[!NވCZY2J0QқaV\&?6e'd&(#pЙ[aJ5H efPz+J:3.48WuZc^"gxFu!%PFhy|]~*xc|^ }{[@*4T[ ^VL D407ب@BldbFđ`1f}r a'mQJ(J7, RluTF`MdcjMfT% WPj[xO P#RZl\%Ju/*9*2( )U\Ŭ 'kZE]V*34-FP&(\}BK*tKq2N4M)λ kXkPi*b2#Uk"UE.jBaB>hAըW YeQ5:eK!4EFES1- i)6(yn%nZHK%#-B %gt‡XW>%؄Zp(I"Z>F/5b{$E&a?O L<@/A?MG m@dRFl.&[<;ڨ}}t+KYq&Y4HEױ~Ll`'SzURX )%H8Xgo{}7V@LsT#.DS+.̚!@5K?Qq>/PIVe?D,OOnn1BYXȥLtV e`?d@Y'Wip4%ɍA*p>g߄e4pUǶTB9gh'8nV #Co'unjzKm'ĵR*5rqQQOJh Rk ,>c_jZ}A.L7RHH`֦$!4q)0%D\ĸN7Rی4Yt?(݆7Y>8EP,suwF|Y0%<䍻OEɮH˸US˘7tR,*υzOj2E߫TRͭʜ^zJ}!uҳ8"0nغLJ"4$5钋*Jm9 O;;@Hjl>y!Jyǎ/ٔŖ6J /Yeɿ$/[akrPճ_fTYr F`kaꕟP7$fsE[Ea V4) I*_W{9RaQ[.:'Ϲ "&O*r2kn*gi8 MeEPU\j`0#XV=}:BIOΌLrBSV=}E`z8#z'jtS,gF+L!Qz倵ӟ -*ZA}|.CzK1{?.pN>F^}8P2,3mZ jI T= { ЌhTYg+84rV~ாnxi{*ȥX5vӼ ,/3G3S\ :.LڜEW^5lZT*.xޒ5|ݣIqǫf3R5ʣ7%NmjԂ,[/6=nS ՈQf~̮)] 2G Z8HIVOo >0|2O7+ ,8޳,ۖI @~G*Rm9$a5W Y7nu3QބVFTی]:5QG۲:[ ,G1wjˈVbPR3j OO9 { !/8jcztBO!Š:}Q˻j=Xt0!/}8e:7 qE7Z{ zbP40|FU6[fzܭ)m y)%Wk~Zԝ9h<ܜ}ys3ϵY wc~sW{ʽZE΁JuaN>R &?RE7[#ѭ'E6^e>\t4!/8rzt#.bPBϨncE2i^NΜJ=^M=R*+,Fu0T8pTFép GO(?\MmЖ0>?q[j>yh=ʼӧgt9gD&P "6*nΡJ'/Mtfyos}mSz)-b*A w]Hqݼw&_V[Pa&83eŹ+qض\ .P-!iQ_kJЂCNC"&&yF٠ f9.%%uLrd^?{ͱ%/+8n^N&|-D }1>i@FZ<~T£1Y&)\Iا1?>Yj)C/y79v̢ \HiR;{xҽ $ҁ1krͤ \<%]( Bj.(DQȲj y(qW_t\E[񅸋>WE^dr.B18\SD \辢՗CK}zW//phVE.Á \:  ^H9Kz&ju?3 Rdh!6 ?Ac猣gd`r h[kVY5"[}0$£pl - L&eͤi,PEe-)$5F5U!j\,o]_Z䨱e6toBS|G&X^մ'T)$<Kx^E 6ZYKɭZϡcxөܻA[E8djgj:C//ep>:ψyaZM޽'zArݚ:i|ԚZ_ȵ nh^IC<-)1#@ |qL# )Y6MPFZdAƌ> vR7G!9f#ZGQg""rAAȀ;%2YTdGg4\*ҫ5!ĎX4{"*fy+p0erEs`_?}~.1_G_oysW .Y_Ys_r{wp7s?c oW ozϮ/Ԏ,{,G U>|4l~ G`֤\JiNg53oowM<+BA}k嗱-U<$a'f@H1G 
/dJc>x#@-Oyk}Fi$`7ub>((r/y31*З\yվ9G%@**n["zC7ѹ+}qfm3aa]_n93l/c! ˔.J?Me=(41]_t2|/mxZM{м|-/I;m*5AV_Y-[O'HQR ā nUwUCu2嬖ۋ); ]E2>B CϞ ߬,ƕJr#3)bShXy$k8/"id#xgԵbmnsK }f m܄ÇۆPB˰%n-]^0`0wgP}];s>| -Ϙ0V-!<>ۯ?a~=q5#>>>+Hl󈼈XQGjY$ʺjID2*<"JI/d-YǕ?5]u}8'>DOma=0Sdv^_}iԜSw7˘q)9 U1R\r:\ں-j}@+& l> u3‰O/4 &D̊Gڙ%rJim&(Ze5\s@uޙr@=E >\h{oZ\*C!s@,ƾ9iPw 1R Ϭ>/!}[>C/KO ?#+!r@D;}HQD(k@8Wk>hzNƇ#!RX|fJՇ\)rB5L/JmM5c)-S-=c AhMQ5uz˻)[ \L'>m)nώޭ M4ȦO~ǹ@<#zPN;|ۘp;%[hæq1o v|( {߰crI@uI5@P' IRSHLJQ}ݤZ֫VzV fRp*V2d&J}35 +=Duj(;[)[)88K/+5 JOJ uRBTRI{ 7+vŗTIX)nVZRJOJvRKTCeJQ}ݠc@l'ḻcks+o9(v^Kv wGoou=, (vf6sp䀡Zr^x~i"l3xkZ+c>;ā]`恫ZWN# Zo>U[B ʣ>׫(Y.>e+1C-cZ&H)#(ş*KtR*Z#v**ʌE):zrjyJysvZOt^uHד.Y_Z vhE)%ZMhG07F UC vEЅ}O%E)zk IԇGֻ*)ӯ8~Gen}j hgN\\^lq>YqͷIvޖon8ΠHafwwd B)!NJ]A/{ϱ-J,KU_+o_R, +TYWȂ|xPX\RL'QDSMmɃ~.\ gnI{5I)pubS֡ڤKm#qEO::eݩIp`x PS9><솴:C:Vu&0Z7MC_bֿ%A*5K ${۠D0b;ޯAᣐΨybI$*X¬BBJR#0w3ڄ򕭪m^l)ږ)j$YʉD2Mp!*2J0 g2@fܺ!˾XҿxmwiyQ^.GGczt*mRXT4@~MyEm|ۅ.gA՟}ۋ/=o.O!tU6Y tZ?ݙ]Y&t͟P$f63Sd}OrPNɦ_'Eb:k46E_P^D|I-ρM0[T |8c1"BNQR@`RP$1D"#*S$4%$:-tymQ0KAp1WjB'=#=/q虤| nPͫPmY)pтq> +%JKVzV W"v-3/-!/OJY)'!&0M"T\!ӓg!y<+=LujA+QRnVZR89mї+IS}ݤZZ^JOJ/TIXݴsH 38S6RPde":i!ذ]d:99I"M $ ҂kFCW座]08mLyZu,d#^75Գzlf\H6s=pw8=c9Mi)Z)3>a('Ym(FKc`^pޮ#7bK!Qc6\ɈPKUT\00ZM:',r$\B %F(Fi݅ !jV1G jo~ w3*E̫PR1AHz1`jB0xFd[a zB6g瑘pPݱ{_̣C!HS"fG)Yլa)<1Pʐ$xS3+ڹ)Ƌo + AHj4K9]h3 0˨yn/\^rk2N w~'Y+љbI-;.OS!fN>6PpxhE⩅hs2~JTB_=`"O-MZD-c~\elYuKk޼BQi%e!^ O_DžQJz`f|k7"oE-rl11ȑLJ@DKrYB as|ů(h;7ڮFdthScƸ$`ɉ Ru]yХɣpd٫&1;ez؛@^ũ|?:P|j-9iY|mОje[>TZݠ8RMbDctHv iO5E:c 1lGk`q=&.Od܏y0)yُb 7 RM0?H<4Ow<1R襗:bWWtaenFz]5'n:N3bۄ34K`ڻz>!@,iHN k>r)j+&k.L&L&QJta2Ʉx5P>Rg&7.7kT](eHY>E;MO}KĂwd/z`XUԜ[1%7i>:υʌ* ׺m /jwSf}Ea͍;spUHb3Й3 J`׌؂ڂ/Ij )"Nk>ОS!{M8R4j|culYaj&Y5f6/)gEi((}kD dgй5Y3C J^[&k`FK$SVHD8 x^e[^P`oˁxqaw5Sqq"gE0nouY|ҹv+8DZL[wmxhy:$dApW\"PuS |Iˮ /սfrY9"(plbs\cp8u-&ʻ}Ǡm[!FSbm+!P؂<̠)bLʙV*SN'0ط].@Le5-4g#W\"MkVU^ @,Wv#VY"i;$Y;0 v-Av*ry#Ef·^ !GcIYd@+hLݴsn:N3bۄс!om@+h}Ĺݶ&n:N3b۔Zmz>!S(@xD酙m䖢f`N&QUft<05b8 ;XwW x^q֒B! 
;{a݊b 3΢$dz GҁSSLXhnk5T!'KF|#)A0%(4#g3҈b ZpڇN8Ci/3ݺGvT#P|DM:|ہ/T|.Gۏzc^jX<\26Oi?o5 Z,bF;Qbu`*3 IL{CHxz4zqC@}L~;RȤ9*ZBUN(.e"ׇOi۟yIJTH;o~ ߽ 2Ӂg| BŬ rxF1*Z /f h;^W$>fW$({ 3]hZ.و@,Eŧ^a*]ޟ \ȁ\‚UT26j[UQ2ӲD@ZfB  (ȭؙR\ш_ #VQQZdJh܎?vcXܒg,nLt W'=| 0˴ٰ-5SsV_vrjG`Ou*LX˰j]tH՗ֲ\/C>ruVl8Z%V_gA/[=E=]SOQAt%]bQNu>l?Ue'YT ;e珅~ͣUX!# Z.7#``wZoO8JCRKL#.h·)B\ xԎs: L}F >< Z?} C4 S8 D&>#wK[axg= C4S;kQJ;~Rkyi훍5BK 0$ xk)Rt49 {XՋx6+ M@"ϸ5f6/8-(ƪ*RH8ŋ}#z߼,x:@r@_QvsC7\ :o}&^5=[@` %L^+"֬p("Nʻ؃4y8%i#ىkYRWktXt㺨 )5]"<*3,T^[% zGjHowO2wzVWuHۛOoJHz`vWA6ۇ̊׵KG +e. ~Fߴt#$x?f?c4dokyG&\R_H-pA2RwRr3@qwv /(=kmkJ%@1w4 JRT FuM<R" J P"Ǥޕ JaY:goHOcR_JmĥRђ/ǥޓ QzՓpJR[m.qyTBJ*.׻RˮyT;ST0yAԼeA3ΡT2JK}+5 vY7J C)WތY{R[sAyT0Tx)LST06R[egR%P'NqwFxAYQ4<'#({W}p??|6yHt!%t' pؒ}/_#x[j߼u#$h'7Ik&NbAxIlS噵4(FFay$UhjʋQAuIjƃg5Z>TMUi<>\י3_͗6߳]Xo^Ѻ>Ze͙LЗnMуlȨRr*a.;צ-n]%ƌd9ջ]Ak _Vl8iR_k `JB `FI%hϊJ/^nܾUQ^z`}Z 49 mpXی,$ȥ7W7֬`Ghm!i=.vFN҆V8{~|g];S@ `kӱȻmnny PUC) 2d KS9+s,yVR#~ =dp-.Vę8mF޽k`.%3ߋ^w.hͤ>~T;~X4flQ`*yo ޞ'8ؗ{s!wԼntܛ'ĽNܱ=t{tVThFk tۯbF+dkFc6یWI),b#4EI)d zGV}74Nk OCw{i.u9ǗǝM% 015 .Z!Jb$tYC֙J(5Ruf>|#xҵ>g" WL^ADޤ\-3(blV>Q2Z=}L@M^|#}h7'E'Ts#t \X|ErK̴@V/Fm4=tO>>ç1MpVk id/\hbifņdJq Giڋv#W [v\ضvRx^㣽8eIP6]q )1k~ᜣSZ* 0T*8y֐R/{pQhnN5GĎ4rztTSEj` X3pיT5m|$X$ؘp.L܃Gz+2lW-drgcU0Srw7lYL c5Ɯmv~r+n&t>)|d,Sw.>s XXN+*!AM"|])4z90QDojso:apREk@p3j&c@hM؆mU<1#<вHyQ|Ā&bԖ1^]AMN7K*} llߠQxjz?H q},%8 0zD1^XӟC:DC迨жukЫDƈ=&`9-PX PʡJ93tn bc&J30z`@)Kİl* }wV Ib*eHA )ȵphJ )r!bJJ#-R0 Envac%8e"[.?ks$3KFMfR )F6aHa$J{ZoWj F Pj(X)&^Rܫ21- QXK{،4.B$9#w 7`N:xH |nT; Bߙ~^^. P`_ë˜"=~=|:Un`ƿ+LyNb*j$r}lLpP@lF.>R%Ayp1I̸ʟoB,_K3Y,iB)|5DGv=M&%aK2i4nUV_gӅ%t*% П*!OppI׳KV39|7~`i.d6~J]^uN'7D^-:*׋m$3f uw5Z#P9żfAXU [SyXT< +eqSvpKE$5Ktι<jpT`Q [S_Xמ뜋t"b6ՉqU/+&T$O j Dz #XVϝ.*A_kj{M +b&s~ԛ5j,Sv`wS :c8J6U©jIEpéW/6f2oT̗˖+K(9l޶l7hnߗ鹛Y_勬Pɴ1_w.s/o^ߺY+oѪ\%PZ ԧ3B-*\ Eݿ,>-5[8}FIu;ċ}ddVWj!O*US-P6eгȦ)`6݈S,[k?Hsx .jDߠ#=+œN% y^XYd OmP]pjZ:$CQ;1~F%Y2*Τ:"ǢPdS,xNǬX010$3 cP[v:'Èk. 
4UG3=9;I'Xv(H@ؾףsǯZt Ж} D6#Q;'S 1" 1Fp1L׿/2h$}`ZD7 >Bۈy$MM8riQJlj6c8H]YhVZhUMRL#IS o@22.RQTwukD"Fیab2\`qa6_` fgTSs6wZ׳Iy2㏷yȂ ?OfL7N]rońZ==Ac⩸Otx*n3>%c*DbW|JI6)na~y8GO{E, y|)1#0ael˘]䔘B~W7xlTXB8QTut 5 ډ4-zcҍy#J5N{Lj,#& ZB{8o`M[lݞN+;9' V=Ne%( Yz916Ҿ "lD_G3%~H?_\Y}M(_9s]Uxhto2 sY*wa ʎzeNe:MYReb)L*FaAY(Y(w_A)@;f|5V@PGn1&K{E+!p@>ex"d/MrZ_I2>:Lj3V}J|^;a+sy6(֫We_3Cɓw6\ "h)ER_BV 1 !'&!K|'=,ei$%&H9CON3A+01Yb=S- %@$6 +ԋȓu[*A"uWh)'~:E*o*HdbJ{S?s~TCe !s9Z? Qʳ iύu*4g;4eΟs`IqtaӁ6 BbpD('AT3έcZ*)F@ * ,)My!XhNL/ܺZqx>QXA/z6*=:g^=eAU\Mrq2Wsq]xxa)WVL!,ԉ뜠#=gf1b{vփ@I^m ay[<#S"*m{.K%wnT\Ca 3%yQ,pޖTIpgf+*|Qޞ!=MA{WH/N px1ixɋ5jJRJJLԥB//`|U%#>=ܠ ~J_ߗB;K~|3M7LmW~P w""ػJCO86 jpO XY-S-ߪ5N)!8 h; w XCԏ.i[*_aډfmtE"R,+;Њkm'j oԨ]0$aU{AuxT[M@v`gyJ-(4kOgyTzb PCMJ1QPmdofȗhyHdV# $otP&PďsSǒʹGϴ0LSČϊG-۳ZX ;| M/ԑrptҁlUzZUe}1_]СpXt \Lݠ롊.vw{.f.F |6oX#ڜ\fhr b+7/TWR^PߋW^lxkBm\wQEYދ.jK+8 vM$+[l  !XN=u4sna`Rq M(1cXь\z1V7!*/NN\ъW3LDyQ$r ~CF~b@-WCdF{ zE*O=fTAljʏ28> 0&': aőU,՞mB|oW;)~0fE.|rJ ^l {R5pTq: _|ܕi]vWU^iWVJeʨ,)#Q{I0LqHA#ނT]'ԕ/h$)rtr5vy}J6~7*jh\ KxTIDP'&_R0+vC1&:;*91J;i#Њ!8,xLRqOD5F"Nn;gm y88}d3I:zyZr(*Ak"pf#EU&O-aG[fG#Ew6>Z~ޢ=G0C(QբH TݮPl7 j;FCrXFjsFEQEQziO+5q8*-RJtc )M=P<(JT?}Z;NԭOk^PI] EHfn}8ٔ5@P`نbƙ<;ZMV v]Kx!SAmAuYiхr@kn$1W qn]IVx0h{+QcGxc\uk~Βidl,1ȈoTĘ3 ;ʞ\3aLqtLÙ9^V)?HsܾV>Ax% rJUQQ)P%pMv3p6 ,[˼˸΄A 94{-MI]9ROɛҜ)#J }3jrɋZ7#hqJ Ʃj{79oKE5X?ċ57(sEI7!sB~3ዝ=lկկaS)ᬮ78A?<ڂ~̢Mq7+wJMʂCmQ8]ڢW1j&Zhư(d1BFps9GP؊X53G9Y(^Ϯg=o_]~~l~mzتcCZZ+srLZ Y?gއt@J8e:|+t.+'r:Y9ZIzγ5ӷ ` ;REcj15'Pl]h/@y P7H.8rrCn=q~4b9+$ ZSKwN/хL.ѝWR&Z&O#zOG-?g&x.+Ce#A<%֭? 
auuoW>qE[59(@ۃxb5A&CJoέ ->Pr0T#5Ӭ)fn;?M$UWY^ezU^GADĝ3eIRI[@S5jm^x|gBvSsӮ%m{մ+wN_M+-#׼6Et^>1K ( %'# lJ ht2#*)N,F ]=3\J4 1c->w`0d+pVh ޚ1TeX<ո+Ն1^)ZݕVw_LSͨCJC(]Wt>}f~wuŘhj'6Dw0\?sHnqv\%n~.È1|/$_eQGOg=N=z~(|B r4سٔE_rW |:2+!D3F/t(\Of6d+Y}K Rߥv$JN" i"C98 ^3BKh8O̼@sW;CBU;&;-/'Z+ȿ\Xd:y*Q!\ $8!BdnGE*42FYT1MAu+8=xuR o'8T6DA9Kpo'swx:D3u lGl@S]@^mWBT} G͠ 2H@L# b(;ƻ-,jyUˇU{T꩜uȴx?|˱z|&Kt~Ije@ɳM-r?k5 +lPϺ(_[6.#O?g#;^Et7_sUNW<ϨCӝJqʩ*i4J-sҼG]9u2ɟ D72%gVO~|l w`Fy|ػg_Ftb+# `9aFW`IUHzdA9Qt wDp`SAM1EG- &rF!9b0"u[E4=ZGkJ+ U5 O)=P(Bi0]eW|Ȋ6])@Qzz<'\y= 7ތ1&4PV!%(5)R0 7bJjRSM:K&-!qezINcBXS߽!*-^B fV vHq9n 536-ogaփDzfYX- z*vlB%\ׂ.pkOdT ނT{:(kW*zz\pJjr$!r2O"4<8w:_~ $5p]zlb+kkA*X`+kMjᵗ`rپNcMiRM;hDaꦠc%j7[Q;Q{ Ƅˆ> x bt#sj߹4eO<ۀЁ-8.oj^.r.c+kmH /=P~`=,nMNȢBRJ߯{!{EhWU]/\T@ $f ˲tB-fY׼l/sE^5[y#[*%GgblmB %Pl 좲,(C ~j]RuP6@{7ނ:L UnW 3n0#]ًcȳ(v 0DD(m*Dhib%BW҅zȗ&j"4\ѝ\<Ո*_ / 0*ʰu2a z*TˊO'kq' 877ӽs0-kpJ%]ѫ>O&=U]N(>c^AMh=)W_Cpx4["_92rFzzs[)oqBL^\AҍI}oיGRwqOgN'U@:xۃX(Xv1(KC5~ʊ-!GIHNBYv;LLC lϲO_q=ܜj3(zC,'4ͩ\'~>N1Gpc !oJ _@vP)m 5wAw*`%ȔDr1.ABXt/>W 6L4@`ieR3 )|%P4!HDR$ ,Eَ)e4Ȁ@P`#ʑ#dJ3MHRręJ 9\ lu-2!8hvE& 9S+0E J. 
p/vQn$r#uG{~ ]m}ףbc/= N(h+= 2g`y[   ([1S%$D*M/*`aVk| b D6њEps9^J4H!,,Xʎ ü?0σ:.*湠\[pq뷢cр]42^]8sm w\_߻Es2*S 7u:)d@GE,bIK}"J#g 9Mj=<=,?)>N] +ujGD+Md"0w1 YKS[`I0vNuKZ4 o_ X{(fR2fs{S)m,Ab^5YbOچ>Eic7G-#o/׼-kLShNxGdN4NLHUM  N_?U)'Hc=/Җyֳg|yf T|9GvTE~"ǑT\!Pvĕ}w[{EC\3q1TPⰋNsXP*;!!P*)@O?gv{ }[7`o>?]RDfNV?[;n㶼;oHyIfxʥE4RJ@j8L05cj2aF1?1*%5~=soo&oy^ig~f>M ΚZ=.|qͿ6oNV@|@1S@(`$轝:n32!feФQcĤ8U:`>ňպ@bѝ~٩4h]I;C"УHb;8vvN(D` (qY #*s[DZdYJa"!?VB q1RzѮ2j\5hv2Y#ɛTHULݚl/Cqrs=[]8pOnMkw<1~'Y&oPns=/3 ,yv2\=<8`圜OՃ֭m} oUwO?^LLMVd܀w.8Џx\zK"w3yM @~;//-j3Zk62-FK`YBT%B,q' ))Qo^]>9N9skvxy3;r Cک {ZF"Y-`9hzSex#`PD u&j"mNHH FRef§ul{Q GxO2[.=1lN[LUu}_ʿϒ|%K۲$/#bMi;+m]cx 0q/@+rr.'.xNtCa⭯*&RYsv+j’Sax"D@,-9/* ?}hmI~;ڒnE{bN 4aڒg(SL撃HYSdβS*3W0>uC͛_XM;}lb[`6/.Խn42ot]3s\Iu}x<-8-߿!C*G;_\vlcNq-IBBO57UgAKUgd!xX/2筏5SpD<8%Bt$L/gn~m3~oo?ûT=([RϴXtÃOtO:ҏђt1RМ~ \N6 #o*~5hH`ǘRr"1 i1SC :Y=1>>9C ӑiцɂrdA'DBLjJp*YP`Kk|YYApY`aC=EdȔ˨vXya;g;B2u;CaJk`D&J;as WVRci#4[o#4SmJ,izbBb :C$XȌd f`Qd P+8UH {kKs\R}}}_fۿ`v G5;kPٰsnKM֩M_&%`>KԨS`ͰT-fPgMc)pەu?SuCM1UԠ9h3  Bi T &ҷ`N9|_ÄRW&& TN!6DK@#G\)9 SE@567X>ػ>A2\/gw6]c?ͥя|*9 r;0/!dC^*CVE`Mh}]j~U}LYvo9i)T-fn<.N Nb D1sA_ @nEj[ntg(]낂.~(nG̀A6ZdfoMrgR[m}&w~-#T $=u -P&1Y&3Z3ΐ`22BB)f)! 
1[Jr?V/T?{ 9_Cf<:xH>0h՟181vߧ;'p71wf^O(( /7ssfh=O1FB٭Мy\p_xtqq4h59"u,%o*}t{e!~H+(ڍL[,!:msTL׮[BC[EL*^B`w/boVsX{ia܍R#Bw,Q*CD 7.7?v543DR kBUD"MT+kG"UV%v|8;ںnL99EV }xjtgO ñ~fO, ,8iTQ4=(ťA{x3x7P yk&eaiO`OZޯ[8Zd', =xWqn)yl 0^T X5[sԻڙpn h4Q1-ύAýoЙp;ǯ걡Ju njSjL!cz8UT?8bo|.WpGQk_P%ak<=,?A8 8(Mﺛ+h+\e!3z;ɋxS^\`>9)ڗW6.=eWC4&~+;Zl\R?ۯJBPy NWo G\KѿDB!@FuFGjr d~إ9o(q-bޮ><p9p .g.ڐpگ)됗e/ GGaPhKkc(wGaqbٻf5N*Bײ%*NM8{.)ǽM`Ojq}7 t__5c)}kd~ %\1R!2JB@Zxf: ܢ7ëv%H1"* @Z - ZfΜfHI5~VY-R(tKgi:4/1* e/1sx m) ٶ+f!9'1TP\L#R+wd1K I=H,[D\xB&wK6r+{BXo]j$-KaLT爹HZ꼝hInkaZM>.Tj'3sU- N/TDH# ޵u$Be3CozN0ƈ ܕ)D%c[}HQ7ύl%LL6ϩ*:Qʼ*}dBiŜw,xH&y'h(:等1Bn Z}5sھ=2R=x]\>j{,SHPjRpS_s?qΦBx[dIwa\xN@ %߭u _l~FE|iswqoľXxU_ {sm6pw=Tw^ˁ0me7$;b.y/Aɗ5S}l%OaR UN!BWDzƏ+A ֘ Aa>pzȦܥT[@5Zj:파*Lrcwwspq9t6\d" m s5&4 jcuDh[K,0drR?l>T?FrȺEgeSAF˖,l^27ix>3DGoLUދ*yFͱ*U׻ "_/' 0-W30<" Nk(N~o<<,:v%N9m6ۤc*aT^exÑ9)#zp[{yxvk|OO{ٴ\ZLRS5L+K'd%]Fv-/):3=3Uny428Pu\(]ɱԎ(݇mYYR d7PNz+5S]%f(I&,MzdJRj,m,^.Ib <נQJ9c(m[bu'&t#pWޮErXv ?a55=5eBob'JD$ "M$LCjp%/tFǽT{u"|_?'H:S"hP  IKH+Y["qIGJ jynVMC|rFᴢ@8@H=D$jOԦNY|ȸ";4|v|#?LA3J f!(TH534:hMK'@':)3,E&1LqEDk[n"1}^ErhZe>pavbr[%v^D,kP-BмGCԪig8A`h?Eo5+B HwQn9}RP! 
O' ( Wtt GX:.BLL`;Z 7ʃpU"(GcR&:SJteD‚/4c/ :A]ZK1۷O|7S_N9Sn*&L]ZE 3g'k[Q7; ̠Vp#gh6 9noR{{;8<۬F xPhȃtZBR|u?* 6ObY.[\R79M\d9W:PU6ϣ+Sz 8\`Oo@|ω]+Ua&r*c\;lIErG}}kƀ51\FH 1:ʱ3]f oծ<|g}Z2CNac+& 5ĩFYu% *bV X cI+ܳ"/1}u+X(Gg[qDu@ A"{ƽ 2.5=hCqE/TѠ[ 3ʇr]&Cm()%꼊Y$9%=gu$E.5jcD>o=ۊ4 ~fxu.x)Eju8Dpq* d0wq=و0*6U^q$W~>&YΊt|*f^ywQ:K*D!OsnuC ?1q SN3 m`=9TS<ﶨaøXiޜ\f=sd]n֣l~[P8 2kG([WwjVy1Iv.a6!cAyY~xGOI_]QvY6㫃$PvguW I8rՓ|"^P+~RGjuCXz|A,.Ϸ,!Vf/K .-Ϸ |8jZ|_XTIDbB9 Y4 L0g:aG&)xZZk'y܄pܪ{H/[o__+Yͦ/l1Y86UwiN}Dcacӱgj;?y2>p H9:d 7eT-rA@":%Wj,cmӜڲwn=9/ñ[W-l~k-ZezW"FXt>=3ܯ-J&؁^ZSpn͋ kA@xW<0}17yCsU;Սx"%y;J%tSH0EꁑNe> j#6ʬ$TD 5PH  (L:M\`m,,hC%سp+ !N`-&NT7IyT1)@I4kcL7/[:4OKo;oZ߭)H ZwtnkQ5^8NӆI @Y.c^Ge[F:aԛ7Xv^J,W.FC6aFc5J+yU6hmG"ЈorhUډ0r8m`w݅v(`]QuUʬ}*o&E$FUFk2[Cj8z0`_ed%#VDyRѼNmpf%khņPMnzt:G{]h|v14$Ρ8%56`4xJfa ;W5 K!@ V3Mhy}c(Fkb/[,38F-b߳fQX#0|"1ݷDO/}롞m;lJތ3ZS95THю=̈́ο|Ԅ#@ '萋) 5R;P8C^N.s1Lu/Y, rl 4%NAў!%ǭ@QFCTjq+ („#"46>C03g)7G4>9JLҎTD*ɥ2Cz(Hܔ"YD;6Wm-ڠ<"< ZܴS)dP="I3+-U!j?ZIhp lrcBmsoqʇ˂e e})^XՌ*Bٌ[8}#&BGS"]ѤN ;dS +S݇|"Ijgϧ,li7/Gn]1(hݎZѡ[ۮڭ EL{vA+d>LP\~1~n2z)dtR$R] ~uUZ 2~ս2N~Շ|"^ӈ mq>Vްa& K# ACPOk CԀ^Rt@o*. }7)5)yΦ7&J0C.PֈvtӐQD˃%Nbߥ!@@IaJ-7@V6R}4qW$vhh9qJ }J!m֗]n~ 9)B~8u ̆3}K3PSQ-,EӀ[L)w4^,FhGQT< y2?FTq0 0}j :bȳ]YsG+4X=he_bdy^elʬl X*έgժAWB51ga囪AHJ҃9F59/5#oNV #A3F@N.s0@aT':j$W$+TcKVR1II}G- Ȭ:$"u~%Ynt\*1*ց)ŭi:ڭ E4I-ѢŹ0e]nĈN;h#z=-qvCBq]JT1Қ?vw iq,;[$Lν:4Ĺp)#44"I% :XT?;YM=Spv&gjtDKl<,K7]_.77wm{zt*:'KPQHKJ_pWJTU1lA$Q *19#TEрp-9 JLTX4:xoר^ږ(TI)}L%RbV5dE',X )E*բ"(TI) 9.=vjJ9)==)M1R: ]_RKR㤴ZP /RzRJ` y78)-=%"hEK^ݎaS?@Y3G3UvvJƥ3z*Gϓ05u7^*,l~Re1V{wFޛw޻iOM&OE! 
PvGeE%-Rf7@D]%Ÿ6Sz)׮SCA=͘yb#Z96NlBZB1IjQNLQ G(XQlv@(GNa<[(-ΜH HHn2$G^$OD/%"  QyJAm(s*W<H}yZ n}ZMZ5坢ҷ:we?aZm`-'1HvN FOo|;Ģb9,)].At?ji2:լ̥i`j )h#1U!"W<2I1m̴9a^rAtL!)@ŠYD^<:\UlHc3VOw[Ɩ9珜%C>:}m{(._s Dg½S,ނmRe^(_Xji*ʪѺh}QY.K߸Y1 EVʓ#8dTEjڶVLҥn}TLUkP@׶PPUUxH?ddkoԕE"A&}I'-2b+[#qUkJSZ \]M l)h(2}xmBQ{a#Ef5x\'|,OxZЦ'L& ?#%@ί0,QtK'< ÄsQt}W=>"#{{oY)%OkxԫOM1bNں4v C6!EM]rF+؀0wF-*x~h3M: كihUUڱQ+#]/M(΁ُ(EBnhI}-+秫Bl|=[Xά?]?BiR|Ng s/;%B`XOC8}] ٞϳ,u&!U2[m5U_CCC=.GTrqe! d=],teGtygBx!SA|)]‡uSu*bϨtyHw]<'Jm_RHN\El7|<6.%Bp،tݗ/OvW5X_zay~϶*em+g̻hc/~{ȑ'Uȷo-{zV*U4h7W~䩈b̐ V6=%[}g5"s+cu#dHXBf4V3ݨ4BELz/}&N{k,b'hHs&o[[n$;guJz]b;7r)ETXsd&Pg-Q1wk!"0Z$ ߎae:=u.R.ڷKU}Q:%_xN.b7"B/>QGFe."rс s  *Y4Rጂ.!,7.d#ل:sp4x_`/^kf#V\Ɔ4803 Z@`1}'㮽{ˤտF=U6'ߨ5 űOw?^_kO\?>ScH͏ܼ|]1-;sEZS@ ޖ޻B)4j-7U핾cl]p~z:+۹@@;"nCtWg7Y9c8,y;iAD -+:ׂraDhl**}Q6ژmt]V%Х rTrh Elk[enj'äA`4%_ܥb1ވUSBUb[j(. T/b0 cpݲ`E]Qjq~7Azqp ӺP)mƲ)<Ҫ4V!1 6u 1k("EWH(;R!vV4Y 1sU"EF (0#{,{aB2)':Akx|r '*b2P.{/Yogeƍ$5`tG$x:?bmcd)_+"'<] *prH/zx"ݔDof;nJh"Myzzr/6cݔ$jw/Sc:/0YU<*FU*F2 f M: fJw@s;A6>Jzga_Ulgn2s5 Vkϼb/fQ 1;ۨqHea՚'szܹH~#>g"wHX{sp_Hgr1P')mWN<%?aR݆`;h<-~ƹ\[.~#%͹@n3ms!X`aKVfVD~ Y&8kN>q;񴱷uIku/"P"ĝ{Z&oľ[C@N:% \)2KN$)>Kls8Y[stGǓmF]1yo^̝+k|zIe8q)@պ$G.4_KQc0vjHzQ=*kQ:%sH"݂dt.Q-7jiS7X#>QHSugufYw,8F:'蠗H76F);"]no83R5nWQR_7UPbXTCp-aZ%R9˪IJ0.媖"I0D8=;e~JNPR;ͩ@%93Q1Q-c^z]-vօRѢyi|nKO[}kW|QtЬ e@L䢣9VWLU͖+t] tab;SvV'Pޢ*6KzU_;\2,.n-o)卞u-7veG>OpPrP:e]Vj"ju6MɎ*C@,$iUum J5{ IZoC6MjՍCm h2ZUʨʵ2R {:!nh)(Bc/p14PD7Жچ7kM\URl+LU 1"ڠ!.٦,)QJ[Zgnm1} ǿ1\Z!r8އI篏_>~r`߾-4SS=ԡpc)+~ #U+n؅?汸!qcL79SW?DGtxswa8 Ϥ޸a.x|MhJ_7OU0mz\m?q쇗߄uߴ_"9s{]~6С9C<0C"n9W:ʘ0Z DE5|cY3"[0S5\i+qai1N3b]#bL<ـUe47qX{0΃ҏr`6O&8!G% _=^ ؍}'Wa S70&zG/M?U܋D1ɣĖC-=貥_S!e898 /,BgtNGLo\;,WZ)4/:&Bp}ƗphYKf ;NJ{VɦoҪg5V-9Q%git ɏ98Ġun}vjZtO."Mo$.OO L(=.wN,N91z?Qʌ- urw2ɢ[~0 wN,NYx/Yr1P')m%O鄢WmspJx %V%,h'k>%h'yþQ*j7ϭtk;P%݂OwmS\}j7Ҿb 6kOj nZ- Y;W=<;*M܄.nvV6Er#Ben=R&bG\b>Rf00pvT@uRvC;1]'SdbBtR,BE)'!jY&'$"]0%4ڭ Y@P5wajqܑE,HST5}\d%FA;AU_a$ZjD GT2M$kLݡdZ,rkBZ-tE&sw'f-5m[ ~p^4y[94z8N}~j2>@-Nm/85>kNJP{yPpj4ia9PmHj;]H*f<=xԃ$qh0!a4 .ssdF#wG 
YX{c1e+sm q.758\ t01rLDb+*Zo(46/`YN[ 9K51ƕtUa2HB#laak L c^%!nD Kd@ [Ĕ `H,jN_i<|؈ Ăd "s .c%O Vuuí|Ag˧5'LGy)/VOOOyQ&F'16qշԶzrP7\:&] :#AgsL$)D0d$rɕ_wӓ$`EDiV/ y5?:~O"#ܳ{x8@ w7'G BmR(F K #Q\$2Z2p5EXN([Vb/" yB/*rSg?0G0u 循% >@U Ƕ8/$]A}D2Zo՗p&/V">WnӺ,n-5sV< d;$+|+#c*˥X0A" b]owS*ދ /g!puiCg4x,cX#3/Ikay&~?min8BÈF Dveo4Wd%1Srg~sra, )'[*z;eN:H+-jJ@GYQz"1] `l& j, _#ӾxUi̞;/ z^E5bs#czUÉ Hyt25֭hxvBojgGy/ϏCWI 9,n ӯ8U^E&b*բ _,2maQ -6>>w9j9#1/hf:h8fʊbaE'E9b}G05_IvwtGi(R>v>v5@֪q ``DQ;e%cJ+ ü ~FڪJahu܇/UUkVబï)=, a?X{T44u8M \~X#75V51%:_2@,yǔ 05`$N槣oH=:%%o~? k2*S0 ~y{qfˇ/yy{W:,x,4XiӺ?ޜ]-JfIeºt :#-*5< 9WJK=I`󮸺7њ^2`E5?ݧ[pV~yg3LJfyI㋢.I @J`_ez8iW)FΉ[h|4=d5z}-z`U/ק 4p'U oOw~ZWp|‡ը8J>DǵVɂ_ߔՏqvr.? &h$ޭ?_t|Q9Ls}N5V> pؼD`~bC;nh3AdhlIvq{-Ll5k˾H4p̰/Ao\G|6$hRqk%U?T֔ QQ*_ޟP!Q[d^>֤;5Y==cZ4?C2e OJ`: b֗gfP˧0Ȼd^ڋE0`LL M 3M= l?9O΀QwzKGz8bMPs==D=-@0DGɃ'|"(dDH]R*!@ 29//A系_q8?צE4 RZoxaRZzz,x^Y}L.C`: >E8ty _c^eJ.`T/yEne,gp qQ&K-pՉ#>{k{ȫELYn&cXKoX9O>R'Pm(A]C)G*:xX>.>*#=T}Ңƞ.:t= F.;$h4ڤS 7 aޤSLjyX`IJ/7^5EPtwo4OiHRg _ӕww%'a R2Q JZl`WQ:I dpSAuo@& XF5ɉ/*;.* 0\Oؓ0{D e&UkԵ=>Rz w}FB}0^TwZ'/qq0%YEtΣF;KNּ*uގutpo*k{X񀄼*ee=Axd0y>*oongև d+;K%9Z67e~%.1 ^cԏ]MK,䍛hJԻ 6_tp2?-һ a!oDؔ%{jԻZޭ/1)m]auvlֿz һ a!oD7mJ8| hOW鷕`N__Y/GW7ןG(15:?wg߅##ww7DO 2B 747\.$PՋm$?+H`f+[ʺ[xVqZSU\u\@XDL ˉG%cy s!}~NDUsZ5nj^_l8*vqn 7\e]ҙcgؚ862|x]l/ 6̿\ݜ7XXHz^:=?_7ʖ+n?n7y3:sO#dv~rg}[3_Cߑ1#0H%UovߖaA:h{< [^pz!:; YeP)QjǔHV#re$w,Ze Q]>SZ4`ǏɨeaȖrՐC);`Ϗ9jR郕RҒIb܌:~So->r3< @:ީWrUtA_jO.?RKDRC)Ex̔R>*15/5<@őG?ъ$o]N@(HC,~*R4EfXbfQfF-)( E,ϋmmhՊFc>ElHi"=3^ƗTE)BYJ=JUL"{kQMDp^i!4z*Q||Z8LV_@12%a}H(ɶP~Ϳ!W"h%q ypՒ04LM5o)0tiZslr?뇫ԪE.!`#\jl> oG/G0, k,87 0˵#ᮐYJa .YQ0^EftEP,p֒m.:A[ ̢Kf \[X[ds=5 sΚFs7M+Uo9s)x iXgc~s&P\qZQ̵4*d0n(FF"D)8#C+rL/Fx%fiAiX6y\0 N2q?1&Du@҇%>hWRȂ8qb 8=ᘗ@2c~M[ϓK.#d }jmZ6M/2j hesl=7ƨ8R:eKD:H'4}O@m0D ʵOVF4hr?$}aBa;pl d@Kx7?ێ? m;4;" &gՋu d}f8z;WO?9*K]ovZ]L~y;3aaO>q {= i+GSA \ۇ1H$*Ϳ}(}w~'Y-fFS;՘Bz\,fCe'A9{k\N;ATKI>j#T ׏1\J87SSQ|V@"s9@؜Wyύ1jbƘU^_HYqP-ikN+!ըT+o+ P`+Vը+Ek8&8=C6oF Kx}N9ϳ[O{tL,JڈyJ"g)Jf )Me34ZesIw WS:7i>ޔ<]C^h-1 (|! 
EW^HdwgXK yϋ#Wi(mɍs= RNk^>ȯ-@jw(m4:@v:M ͏K2tXSucjm) Om~)I'&PԻI.E`2soҟJ.cZt͈`pZuF'ᯐeU])g~_.gJ{#ɑ_j ) 3abiFݒ%\wY%)+ &*¨c0H_|oʽX)@SOMh_P_.̐9W+hIMY}~PF0=ǠqIy_saacVqs֬nmP5/YͷF@5Ы"d>R_6ܞrygJ}g^ 04h^K.}h朏{u8j%*9O|<%~/RR`>cyI;٢ϩs)zx;GFcݚLp&A:8gJVEj͓Bzx$΢8oq;ɜEGn1z,h f3Eyq΢(I.тxQvXF<5)8s:MK_w;M3Aa_`ZQ6DK{ZD۰+GK3QAvwÙ1Zð_ !f"놊䚨Rh(u^: P.Z~X_mև[wŪ4$->up񟷏k=ۍyպ%l.ۇYXAw$ ]ȧDާ@$U>KɅZUs[:Wby:3dž+Wh=!C4 S~ގn[ (I6]|,2K[yMfvtӍӐ [ (I6*BjѢ[bt!o-L 9_JLi[χ)oI"VqfQmzH꫖ԚY!զ']mJWkpڄgR[(=iv(ڂ=SF)j炿Ex:%5%g4J3f0 rACi-R4 QI`۔(=$U[jy?mJTy|(HCi-iQz(MC)Ė[1 4H9cg2J5qW'l3&Re;QR4rXfR4RoQz(yWmgF)HJAղv(=$U[j(=is!J1TiJkmS3JOCAixFf3@aR[F4p5YTJLQz(R)CTR )Q*% AjdF3JO4BõJh(]Im;QJk ef(Z#+~mDDctaLl&2pjB(VeeiRLڬPn)3O׻y ~! tIڒZ8{'>קO$[ ѭ*lD6v aSQ1S(&/Tni38ezx(\-gL@J i \m/G;xiSU~0I'({wLIr^RtRi5P#s ;n?^19:Eyq&6tXXc3#<`_a8]5g|v mNT/$4(ާXhӅU1a#ysun stMQت[~ow?0OBd//^J_&ݥbLU 5 M¯7ӡ!>[zŐ5L6 k!qZ 9}mzrƴmz0%q)(R2ɭ t5 $78Z i)&lm>fiF w=؛iafqLMS>ͤ-gzw髧Gn1Qe$6.s{9rq,)ZLL 3fF{y[c6F5 h%k4 (}ѹ(\Q'TA N}Dv RkWԣD>6j·޿Z*ޑ<ҧ$U[j)CTP~GLi)m=F} C4 Sf?GoG7atK:ݦ˻OAkѭq&a aVtѨ60ڀnS"zl~1@h S):nk.{nNk>$]M9](R]}, g v?u5ޅm|~sWO%ϧwGw.߿-_mUmn.'.j>Ky_VB񻸭CR|d &5z;|(wHlB}ր7g>Ȫ>_ b߼{<FVI9~R+v:JP ;k?}_? 
bsf ]e;LWd7;N k4j3;ol$c2VrVI)"Z]lED&hz;]FuEt\̠Sykdg&GES:[P~(F/Nl֏-BI!fH[ ,2|dvQ0KvG;NX3[q:GtJ*(\bm$3Ҏ>鮃0L-y2`l)+,T(@<]EqIܗH!Z;9{oK$z!gEHrԞx BRΎ`2D+QϧA[LLӷCxchj`J$ݘ"M`K;qEzV~)jXDUH<) * LS Jقʿo߁] SHeG3$;K)6DN҆[[H SsViQZKDVq*7ff-ј1^̹-3rxI1cOimdXTgVx(*mb-י7yfZlˌhFD`e(z$0P귒cX2gEVdeUe{4RKVVRdNY5nȍ[f$u: o' "p4!4'wR806w5S0K->MZ7j;zNa)fNMόʁڇNm5X>XRJrHkPOsFGn1za(@;9 +>_]}4x s ֗Z) fz@N*םN*L̾#m(_=hSHڎ ,u?Ѹ%u!awj^,hc5B^ܨa؇v\]j#jKje,8m@"N(颪@9q&a gf&n6@';bۄٗx&kшѭq&a 3έ"wK:ݦ\ElU ү[ymaJ1Us7[?촭C7`"zI~f1FYF3Co @,5Pgw|klQ/%#"U[@+id~>l\1F7;q."h()*pIVT.9QcpI*(ESkL M0$(f}(ɪ6$P3ۻo~ݪ?uj6U :޿Rho$)b,vQoeqP/Zp Rb5y!$0-%2\HYW?+\=%{_XՂ ^dFidtnhd(߲jg='s$zn0m~Idӱ⦺W<;^'蕖֏}VnS+ԤgVsaBہMR_oA-^RMb_ oRQPnZ skT^ +"dV 2^FYS:3RVlシX>%qzɠǮg%j%^a]V,/S%1fB2-y)!3b\y^XYnK[Lqe[AX9"d̿&=#M{}gLMBpL]gbzp^'xjM'B˨t"҉l|!2g폑yl N=}VObBQrynͧFwrI$grɤKdhK4f|F̓5ǐe0cjvna8~)̰տD_˖Ck)Yt04tƽ]r ϿhF[cQ ET"\゙U(Dv<[?& k_a58E7XVVPuc_|+[Aq0Vup Jx%ESF=g>%%V&[,EnєΖD/W'2bQ:i)̻?zk?4raj}mf#}wMH/B4\sٔ`3;]$c2Fզ/6c~',d2W`8y芠{fLӊ~%@6y_{q/MۋO8AUBf5ODTTD!mR.$QzHԜ3JO3%>7a\RXs% TJf,IQ)f)Պ`!HH0B7RCR7uXRi-HP u p( EQz(5B)AӲ(=5R=@!e%aғF-3+dsdOnjR~֥`ROR *rRP E)|(=$,uFI ?"ax (EP3JO= A@?ZQz( " 1oT8L+-b3MӲ;/RؕC/ @#;N7P Xo_/&qlv@S_HXꘌ3%fv>-:W}޳s1؝/3[~DN߇Y$C8O6V߉8͑5)ɋY\j^my9\_؜=Mǂ1MȞ@e;-J6@ q='@RMzZK}+{>*_Ye1GKPJq#D"bH-C K5up%}Wb=m^ݱKz3DnS.Sl` ՘\$4a0VDRq d4&De RCD)lu}YB ٰ<[?uW0rtWyܛg%1r({C_o7[R:):O*{P RۙSZ8fqS'_W]i"LDJ5&PdJM/yêkEk.ǖ1D¡^fڻU9m1wW!DP UL`33H56w[9 a[Cp04K~XFSf<˲$1ff**@ӟ YaS؀ILT7q(Y `FEYH % P̨af5 Tsa3-J:#g"5n&nTd _`必fU1-h0L9f$զ]GdӁgr$I<( xN=҂ÙOysefUݭ":¯ݻ_Vlt>jQZx[."{F2.]l7wzvYrodz($[}J/ղ6t]).|I.m]Jӿ6yltx8|I/Sӆ!)}nfCn:N;xB2֦w/^yz&!ޭ0߶wc: Q[8wk!1%ǘ:6t^s^^%zzM})h 5@ߪr8 VN'֔ ET=79O_ڒS^[&Mp߫=E:{9ױVwswY\? 
^\;ыŋ,y~ /X{T ۻݥ!Ƶ*Ywg!e"O D; ]!&wA(h]#?ڜ,;!ň/o"kj&u7gTj.poZ(z=~kqm!}_<ݹw~"  <Ίm#D7W7Ք@ؤeP 8R ֔,_+x۹K%B~F@P1{R*l\ EM>`ˇHM5L ꜾTR0Qm-=_ %Xi)݆!\2T0a}Pɴ,hY::|G,&kS,oDy rBCh-T> }G˼μ[8gwk!Ar(`nIks1n="tfnk!Z`N E' zA3r>RORKT:՞_@Z"q(=$,LQz(%.V(%4Z(=iRO_Js_ J/- Qz(R^ATzKRKPQz(5B)"AӲҽBtF)K?b9D!eg_z(aӗRK λF)hLE)@e%^RPDrɞh#Wj3>}ɌG$%enuT'<*R_lkKBUZ_'㩤^N/~:BSv&MRNQ&qLqv{<]o 2}:6 q)e.H߅&FMXQ:E$sFiI7.̓anhgܛ`hn`$=Wp7 ޼ c{_a`FS7 I)^̲ov,.iB#XF ZPұ(k[9I2g&s&!ѝqj,xeٝA_Ÿ8{qچF { LokeoK?Vᐷd^ȣ4e, p4?1۩w ^GIXiFgW-pjZSVB< <Ԃ@H),*n,&!Xb@@M\ky}g:@l =P?ч/GƗ,ksYe_v}{.,ӷ%F [GC F<_O pb=9ACEkM]ܘlvg͈$lV%^_7VF1LJ\` 0K f۝s5huYojV,O]h@re1&9u6 t;6vۉ5}QQRtAӹ!k8÷Ԅ2Y3y-6@j&5ZAXʩs/f ˡd5?jK؛F:F|JJuiM C4Sع=9zۻ=zP@'U[.cSA ݚ@3hs˻aBuwTn}"0갲Yxwk!%Lyfn+1lk>Ţ]>$& Q\kWO١f-ͧ'~*w6,rEwmuCfD}}^..^0}?x!/aM526x|H޺lܪo!J.Ioz -lHRnqm@}؂ < xOWURπoL_0gl??ZQutc;!|(W6+΀nnn7Gq}|GzG6p@1ܞ+,4-]yc#76%[@O[ A85GC*j-g $* /f6IU5JFb @ m[J<3)P98MP`L(:JabM*ؤlc>m3䰰Ӳ8~ 0׺wGEC][o[9+F 0%ފ O fAO۱夻߷(α,9<>G'ntH:WU,օ^gTBpp<>T%TrOU,|`huy2 Mv =O}`}=y9B`7,,kc@&>me*CѸtc[E;0 $݁IU: yI;u9pG}D6~jYTz5'{ͅ Ս3xn1g!XXd?UΉrhD;mm}Ip:tnkf`"=IîŬ >}*AIxxo /;R):u:tw]̶lk m O{oWRբ-WUIS)0*b1(g du!~F)ʛ_!`֪uudKaɯ':jݻf+ݽɟ5/Jp.p{J4egw 6ؗCTLYw:NQ0$֦&Cx Cg0IA6G~饉dKo]l/JD [x/d݁} ʗr޳Q{e5%kt Z~íS޸F@\SXnl ѯ$O=w+C)iܛwѻMa!?)'>w{x@6V-3;xCf݂1CwB~p])lfmcxs<-O`Orb8Y)_ /[jCe}Eמ#(z[+_۴gThTjRR)/A-WstZҫQ)Bj7;4=W {u.9_OR73֐er>2N_N, zYf7-<*bYz[lEfyZi5ӾY=!]] V;gy  *dW8N.+JK~wK/w'w2+Z: .Y76Zq '`͏w[F&`h󱾯!̠*(tE1!2 ?:Ul 4}B >4*8B1O|1K( aP2`΍ژR洈uau71gz dY-IǞ1{vV[[ƘϣTqګu)1ѡV9dmI9Ƈ:鼓r-.ҍ+3pwmRr}/>}ܯDjsd_!:~WDe}9yĤIXY$ТMw %?fJ+s:8c>|1)3iXQ 쇘 dW򿭍kfe 4fkAs߻HbTn۱ini3wsBp!+l`sF'"2s.ts%,9C e=5&B39{"㼇RsVΓ NGp&Y+UZu$K(#BD^OVfϜ\-6 84޴ tk:É $Kj[@ XbWk4hy3n*Rڴ Vsy, O1Zx7= ۤRPrVw䪎tIH9Xke-ӳ kE}N.xy>y^[!S d ʋ-Ws6z|E[7wgdNcyWtfۛzf9e1BV//S']»?ó|z4l}}?' o|::šiYӅkԅc]x4]Ҳ<>4z3TWb?暙zphw`;WhehxXYfJ?muߝ|畗c(.T dr=aeӺ|OXmo}X*Ѥ;!àWB)Gfջn1 e.k7jϤ4O0uq a@e*sNt5 YKIFǀgM(h&ldlH)] O?Qa xe->~;=$g;`h51^>~ol+ v""aG. 
FR2̡a[d>U{Ф-:ꭅtxɠ7K }zCXbYb'l!FZ eNA6TQlp.u2Pѕx1IlՋT8옮p4n%ks0>c4VvɠRJz͎eC0jWgE£%rBpn( DnjygS>] nxšQe+8Z+,pۺ|YkzCCN+<ȮɆAh_W+zmw0& tFkwr@soN/jtOH󋫫SlN`ŞޚD 4Lh͋ȬtǬ~$zO PyHu`Fp'.(K'e@%q"&NCsJ蜙=caF=zc.z_ZL/Ssh B>4(.>IuHILOq}#@?SRJ;q,DCg}Y5)DJ]A$@.ER%/>5Oi?ua|BhSH#'"g †R )ƬꕮB4?E.~]XS H p>A(QQIn˫Z y4ȷ/( l>RiSR%K-:p.+33N֬#3ѬH]~w1zLI܉X2/&$B9xxMs$x:v<8!;^WX 2e#cE/Ɓ4هFN#z< =1uJAXvNֳ&3E߫rMfǯUP*>2h|̋P׽ 8w&--T OLyCۦmZ9Ӟ'3R* bJĉ@c1/E֢#ڧ/ AO\KLl@ <̢9S^)+2MF޽r[l:2Sq/ !,0X>T t̮XL4MK&XK9D o"(Dq1t}cOۣ^dޜ kT|#y)v|gtrPaOC -L1IR4S-a^U [V:RM~?}}>=mOfM(3+E-H}Qv>4:< w..*RXrh"Fahȕ ӳ-H,gqmu0%[6puGu]_Witީ$[&q*}u!UFMp`١ ɯp_U[{TT.X- \@yMbJ7׶&d6SM? 5dm6,s`@Y}y !VDsxZ g2P2o.6>YMʂ O¼8Zʫ{qħ 4]%X}-w ڸϽ(h H @fӺ܀y!B@ 6Xi=/s r;JOy&ًE|h`a9 Eό-i6y$R=jx, â=Gwrv 4BnENu&k_A CAk(8$ހ "C`$0~R4Eը)tȽ)S[BfgtPt~gudg2M:owQȖաd4WP=8mu6׽I.V|t&/'t);YaY1+)a =;gks˗2%@gbx%7FDdZ&[h®;;3deӲ ~2 zsH/u/`D_D \( }{_9׽I8K m+)L%X) =H/ugJlޙ/*=h4`yB?fz6sD ):o;)0vK%h]Lބ"4BVsuSo{odˊdOC|1rw'B*#WT > w@;y߰k^gK#GRr1#@ Ӈ-cmJkTԗ$hAv/U9g]~(qB֚7DD$P]\vc&&epѾ~~8 @x2ꬳ\73 \έߩKQ Nt6p[w͍FAV\OkzK/-v@6DNM:CkmZLѴ鸄\/G"`(S} |:[Yh}$@k=\ S$U"ZEٌVzk+wO/uZ?׺J 5~8=i4Qvvmm ip5$mYoJn^ oنy#Zo&˞bd#o%g wUfQA&7/В0]| .ćru'ySys<+TKCnq ;%d[<\rp38곏])n2E4%ƣhM{H/J՟ 5/8M Q@MFM<`n6t?08nGG*/py1yXQ5KGV[iXtx6讗A}rʹ.yg,F#g9.SAE}KI [ؐ(pW.3feְǝ߉fϷ 6{a8eѴ*D +;P4᪨Ix MP!GZE _u7knςڌ~i_"soV;bxg6dvwsr.%AܸNI"PZ5@eo}M'6A~N< 8~O;SUiEBAl:f-vNSa5'ٖ4ub;!U T#b=p T; <أ5r Ur1YY }r'?V-rE$v`馜s<_Hޘ%,UAϿRg{q's?,u'aǭ:戟 >̱G؞>ޛQIO7{%9rzEts:ڋ{z7$z Rû/O~~ïG\7^´x'TiY ^g9>|N?G ټ&?Ӻw]~aRAFMyyM`|XOfELA .]yE8J5=bͭ?3776#R (ǩessөN<ɘZ6SF^$\]9"ZzK"ȋ_%fj`[F:xmuCtW_|z7Ï` (M@kyCzb3Rϲ]' h01H 1)?)H|@-Kܦ&fuS9kuvr.^m"(MckƔ)6քDB{Rujm6c϶濶]*|I2应BZ1ZqGF}X]b5MFZ-CLk86se1\}6/x08!^w! 
^{fo^s?5 j7Ft/~—cZb=9\(6ⴻzaP$xױ]/)9YůPdˉۢIEp8$у>"BXTg<ދwK=^ Ҁ2' Kk>OUE+3qmEW6}=sHILR1O8DOAͽeS21BdIPv;ͻ+E4Rq uj{Z%H>!Bg^G%ӗOWXZ)_>mawg$NLu:> g_|xH'DQ%8 }Hf|J1~yŽ 0 a#G .|4Da ST J$Qwbv_JTu-*.+LHTYtXė8$+]8?r4A;"sU\ v@/S+Ȓѐ5~&0(fB~MqJP`Za|7G[_=K ["cIEKC@U]*#tzv1<^5^ݬ3Wᗡ}Gw:hIqh5a7"齣m>HQD)HAߒڈs{GqH  s~jF s& ,Rr(130C,E9 (a\?\5Fפ6fkҸHb.nZD=*hd1u蘺齣b BtlՌ-UOYnǰKP(JBr`hS{0>dLݮ`kQXquV۹~>MTa)hEttOE H露ZR]ޝЪn0׻SU{Z\[Rj\Iڵ-h MtdḻjJ}5^6Gp=ŽF~65,+kܭ?zjܘuzz)^-=\<0*Nmn*H/}l`/6ԹP4]/xYˁ|ӿ/߹PYrYCUNq&NWnPZh FuQźUMTͺՏnMh WZ:U:ǹb$ ֭ bTU!wgm'EZ&4䅫:E%@N;uޖ I?$#}Ya=n\FirvniYN`!%ӑL hF2HRDqB+~l| iNJ0DLj A2M1B!{nI0m]`BŔ1S?z\|4^&BZ9C#BOj?8yDwJU%OD%Ru6/WV|Syc,A#(6EPqUN:y[pHو6`r32\LQFL3Eԥ*JS*"oD0p2B$͸ѩF<1SJ wڂpt/)S'KB4s 'Eh8vᾄ:JP`RԞ,wWHKŌN* B9,C2"ceW|d?\TD&Bd̻C%`HrF02;H H#הG.ͳ m#)5SbU0r5 ^==NAl]8*`=5X(N]@`lՆ {řCeijҜ"e-6HL~H8r8QDJ0,E5"`!OX#D?`$BF415B5F\ 4REj]s% xJEK(%L!B=KoyCC.JV6:-C%2ޟF Bg-e|1qV.mPX6Wx8ZcTJCa%L ϯu*ZH4/DA~XUHDLZeܲ>ؑc[[neB(֢].m{[ xV<߳ApԵN!:'냕&$T{^zh)^wOl my)`S&TL0p~&vɒrŋV2}!*Mi1$Qc,gݝQwU[.j$gut= V/YB&$x`3p? 47"m^`F㆑3_ 7d`K{!*,)>>\#Sڙ*cyd2&yR!zQ}L5P%Ow=G|ף^\RYt;՗T􌓖HUJnz5z++#^Y n(kFƣqe}nn= }H"^>V?+*y6]U/MнpKJHzp'6/VS\=ƹJyRk)mT_.SR&㴔9{ҜjGz|h`{߾{vx5ɻAHȡ Y9 Q@*,Ap,"JU@!<5a?M˰Rn읉UX:<, 0& /U:LY\ԂD0yyRUHdXǡה& |!Of_˨P֠-J%/jss`/o KA3 )*7:_1BJy0N~ krHxO:e$blbVC~Fpf B#\؉{]%e1PTMI(o$߽K.1AU%3A6~Y7섈d|ř{7f__?y>yb.-@,ڇ:ˋ {mM|g ÿ\ ~1'퀹)AD@9O@KRA4Fd?'"P3Zxo'D(Sok|N&O!X<ҋ_lr-N`+8̑ -5U[ݎN6VwiQl #[dIR߱,Wn\P4Kxw~nڻf<#,mkt 6'-8gxp;y#}Shhrh8^C|H9 g1Z aqECe>,`bpΚiEUfY+?='TGRn8ⴌܧg&uxO>{sq*N?uA3CʏgzǵETGs*^d1Dm0S2b.RŜ_cB7H>A=ªQ"`bލo׌[[Դ|;MuPiMuQPڰ3מGby~wР4~Y1Sg ÷pGU77{oGwWq ܟ+оף;d hh6r;[=~j=~lœb(¼m̚X6·ømp>HQD}u:n108Z&q)eKJソCY/ҭI̋=U/=]I9( c=PTI M>](q2(N ?͏x$(Bɖ8OKCa'?-uxTA(5i'Ǣ!U;64~Ѝ$V1Q#1ڴtM," K Nթ%:@%c2"$MʱZoj{q_1naW$ 9pk/nQh4mpi8fdDx@XP9$%Ricίe2y戏}peHSlt]ϖPOB`F2ε>.i~s' K4gR MDhe4[1_>DϺDxHrЕmaNv,~`qlZKg+fpaQMM^^D[XN[!uMf0M%4fbMǿτ|EE! t+de3 wUZnk"0yжᝫ2LDjZvWzIpl*'޻!Og~qȖF9ޅH+g_tIE#+HNoqIr%;+Rnq|Jt~3^L8]U{wF~3F1W;n:)as&NƷeoU~ly;C87. 
var/home/core/zuul-output/logs/kubelet.log
Jan 27 18:41:47 crc systemd[1]: Starting Kubernetes Kubelet... Jan 27 18:41:47 crc restorecon[4748]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c15,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc 
restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:47 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 
crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc 
restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 18:41:48 crc restorecon[4748]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 18:41:49 crc kubenswrapper[4915]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.090291 4915 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095880 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095909 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095920 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095946 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095956 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095965 4915 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095973 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095981 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.095989 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096001 
4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096012 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096021 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096030 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096039 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096048 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096057 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096066 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096075 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096084 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096093 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096102 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096110 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096119 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096127 4915 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096135 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096144 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096152 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096163 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096172 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096181 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096189 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096198 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096207 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096215 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096223 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096232 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096241 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096253 4915 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096263 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096273 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096281 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096290 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096298 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096306 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096314 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096323 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096331 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096339 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096347 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096355 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096364 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096374 4915 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096382 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096390 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096398 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096406 4915 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096415 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096425 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096434 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096442 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096450 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096459 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096467 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096475 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096486 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096496 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096504 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096513 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096522 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096532 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.096542 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097597 4915 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097623 4915 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097642 4915 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097654 4915 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097667 4915 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097678 4915 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097690 4915 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097702 4915 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097712 4915 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 
18:41:49.097722 4915 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097732 4915 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097742 4915 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097752 4915 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097762 4915 flags.go:64] FLAG: --cgroup-root="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097771 4915 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097781 4915 flags.go:64] FLAG: --client-ca-file="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097821 4915 flags.go:64] FLAG: --cloud-config="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097832 4915 flags.go:64] FLAG: --cloud-provider="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097841 4915 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097853 4915 flags.go:64] FLAG: --cluster-domain="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097862 4915 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097872 4915 flags.go:64] FLAG: --config-dir="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097882 4915 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097892 4915 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097904 4915 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097914 4915 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 18:41:49 crc 
kubenswrapper[4915]: I0127 18:41:49.097924 4915 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097934 4915 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097944 4915 flags.go:64] FLAG: --contention-profiling="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097954 4915 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097964 4915 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097974 4915 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097984 4915 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.097996 4915 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098006 4915 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098015 4915 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098025 4915 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098034 4915 flags.go:64] FLAG: --enable-server="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098044 4915 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098057 4915 flags.go:64] FLAG: --event-burst="100" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098068 4915 flags.go:64] FLAG: --event-qps="50" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098078 4915 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098087 4915 flags.go:64] FLAG: --event-storage-event-limit="default=0" 
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098097 4915 flags.go:64] FLAG: --eviction-hard="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098108 4915 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098117 4915 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098127 4915 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098137 4915 flags.go:64] FLAG: --eviction-soft="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098146 4915 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098156 4915 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098165 4915 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098175 4915 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098184 4915 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098194 4915 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098204 4915 flags.go:64] FLAG: --feature-gates="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098215 4915 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098225 4915 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098235 4915 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098245 4915 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098255 4915 flags.go:64] FLAG: 
--healthz-port="10248" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098265 4915 flags.go:64] FLAG: --help="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098275 4915 flags.go:64] FLAG: --hostname-override="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098285 4915 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098295 4915 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098305 4915 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098314 4915 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098324 4915 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098334 4915 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098343 4915 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098353 4915 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098362 4915 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098372 4915 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098382 4915 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098392 4915 flags.go:64] FLAG: --kube-reserved="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098401 4915 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098411 4915 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098422 4915 flags.go:64] FLAG: 
--kubelet-cgroups="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098431 4915 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098441 4915 flags.go:64] FLAG: --lock-file="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098450 4915 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098460 4915 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098470 4915 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098483 4915 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098493 4915 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098503 4915 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098513 4915 flags.go:64] FLAG: --logging-format="text" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098522 4915 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098533 4915 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098542 4915 flags.go:64] FLAG: --manifest-url="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098551 4915 flags.go:64] FLAG: --manifest-url-header="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098564 4915 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098575 4915 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098586 4915 flags.go:64] FLAG: --max-pods="110" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098595 4915 flags.go:64] FLAG: 
--maximum-dead-containers="-1" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098606 4915 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098616 4915 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098626 4915 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098636 4915 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098645 4915 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098655 4915 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098675 4915 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098685 4915 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098695 4915 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098705 4915 flags.go:64] FLAG: --pod-cidr="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098714 4915 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098727 4915 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098737 4915 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098746 4915 flags.go:64] FLAG: --pods-per-core="0" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098756 4915 flags.go:64] FLAG: --port="10250" Jan 27 18:41:49 crc 
kubenswrapper[4915]: I0127 18:41:49.098765 4915 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098775 4915 flags.go:64] FLAG: --provider-id="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098785 4915 flags.go:64] FLAG: --qos-reserved="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098819 4915 flags.go:64] FLAG: --read-only-port="10255" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098829 4915 flags.go:64] FLAG: --register-node="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098839 4915 flags.go:64] FLAG: --register-schedulable="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098848 4915 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098864 4915 flags.go:64] FLAG: --registry-burst="10" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098873 4915 flags.go:64] FLAG: --registry-qps="5" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098883 4915 flags.go:64] FLAG: --reserved-cpus="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098892 4915 flags.go:64] FLAG: --reserved-memory="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098904 4915 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098914 4915 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098924 4915 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098933 4915 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098943 4915 flags.go:64] FLAG: --runonce="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098952 4915 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 
18:41:49.098964 4915 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098974 4915 flags.go:64] FLAG: --seccomp-default="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098984 4915 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.098993 4915 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099003 4915 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099014 4915 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099023 4915 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099033 4915 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099042 4915 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099052 4915 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099061 4915 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099071 4915 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099081 4915 flags.go:64] FLAG: --system-cgroups="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099091 4915 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099105 4915 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099114 4915 flags.go:64] FLAG: --tls-cert-file="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099124 4915 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 18:41:49 
crc kubenswrapper[4915]: I0127 18:41:49.099135 4915 flags.go:64] FLAG: --tls-min-version="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099145 4915 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099154 4915 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099164 4915 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099174 4915 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099184 4915 flags.go:64] FLAG: --v="2" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099196 4915 flags.go:64] FLAG: --version="false" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099208 4915 flags.go:64] FLAG: --vmodule="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099219 4915 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.099230 4915 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099470 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099481 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099490 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099499 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099509 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099519 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:41:49 crc 
kubenswrapper[4915]: W0127 18:41:49.099528 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099536 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099545 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099553 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099562 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099570 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099578 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099586 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099595 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099603 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099612 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099622 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099633 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099643 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099654 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099665 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099675 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099684 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099694 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099703 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099714 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099722 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099731 4915 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099740 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099748 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099757 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:41:49 crc kubenswrapper[4915]: 
W0127 18:41:49.099766 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099774 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099783 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099835 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099856 4915 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099867 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099883 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099898 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099911 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099922 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099931 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099940 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099948 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099959 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099969 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099978 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099987 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.099996 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100004 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100013 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100021 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100029 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100038 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100046 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100055 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100064 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100072 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100080 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100089 4915 feature_gate.go:330] unrecognized 
feature gate: BareMetalLoadBalancer Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100097 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100106 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100114 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100122 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100131 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100139 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100150 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100159 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100167 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.100175 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.100202 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:41:49 
crc kubenswrapper[4915]: I0127 18:41:49.116147 4915 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.116214 4915 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116351 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116365 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116374 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116383 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116392 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116402 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116411 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116419 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116427 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116435 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116443 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116451 4915 feature_gate.go:330] unrecognized feature gate: Example Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116459 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 
18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116466 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116474 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116483 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116490 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116498 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116506 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116517 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116530 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116540 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116548 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116557 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116567 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116576 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116588 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116613 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116623 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116632 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116640 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116649 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116657 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116666 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116674 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116682 4915 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116694 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116704 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116713 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116722 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116730 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116739 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116748 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116756 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116763 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116835 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116845 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116853 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116860 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116868 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116876 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:41:49 
crc kubenswrapper[4915]: W0127 18:41:49.116884 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116892 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116900 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116908 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116916 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116924 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116933 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116940 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116948 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116956 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116964 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116972 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116980 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116988 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.116998 4915 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117008 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117017 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117026 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117034 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117042 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.117056 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117305 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117321 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117329 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117337 4915 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117345 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 
18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117352 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117360 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117371 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117382 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117393 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117402 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117411 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117419 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117427 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117436 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117444 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117452 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117459 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117467 4915 feature_gate.go:330] unrecognized feature 
gate: OpenShiftPodSecurityAdmission
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117476 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117484 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117492 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117500 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117507 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117518 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117526 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117534 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117542 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117549 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117557 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117565 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117575 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117584 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117595 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117604 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117613 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117622 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117644 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117652 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117660 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117667 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117675 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117683 4915 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117691 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117699 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117708 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117717 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117725 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117733 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117741 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117749 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117757 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117765 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117773 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117780 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117788 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117849 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117856 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117864 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117871 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117879 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117887 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117895 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117903 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117910 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117918 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117926 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117934 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117942 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117950 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.117958 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.117971 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.118263 4915 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.123947 4915 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.124099 4915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.125986 4915 server.go:997] "Starting client certificate rotation"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.126034 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.127319 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-15 05:32:57.443229799 +0000 UTC
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.127406 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.151651 4915 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.153786 4915 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.156218 4915 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.179755 4915 log.go:25] "Validated CRI v1 runtime API"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.220289 4915 log.go:25] "Validated CRI v1 image API"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.222860 4915 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.228966 4915 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-18-37-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.229030 4915 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.257123 4915 manager.go:217] Machine: {Timestamp:2026-01-27 18:41:49.253465971 +0000 UTC m=+0.611319665 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c1860c83-6319-46ea-ba35-7a2106e4ce10 BootID:bd2c9101-94f4-4460-a5d7-3dbfd978bc2d Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:26:30:51 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:26:30:51 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:64:c8:b5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:40:76:6e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7e:8a:66 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:97:27 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:6d:69:79 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:21:58:25:b4:b4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:55:21:1d:d8:e4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.257503 4915 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.257816 4915 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.259179 4915 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.259474 4915 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.259535 4915 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.259876 4915 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.259895 4915 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.260502 4915 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.260552 4915 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.261409 4915 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.261546 4915 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.265009 4915 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.265040 4915 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.265079 4915 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.265098 4915 kubelet.go:324] "Adding apiserver pod source"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.265115 4915 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.269376 4915 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.270400 4915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.271866 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.272019 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.272056 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.272172 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.277073 4915 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.278847 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.278901 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.278923 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.278941 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.278970 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279180 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279197 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279225 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279246 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279263 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279309 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.279327 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.280323 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.281200 4915 server.go:1280] "Started kubelet"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.281554 4915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.281775 4915 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.282379 4915 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.283051 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 27 18:41:49 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.285636 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.285695 4915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.285830 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:15:03.946266906 +0000 UTC
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.286071 4915 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.286454 4915 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.286079 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.286698 4915 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.286832 4915 factory.go:55] Registering systemd factory
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.286863 4915 factory.go:221] Registration of the systemd container factory successfully
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.287213 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.287262 4915 factory.go:153] Registering CRI-O factory
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.287297 4915 factory.go:221] Registration of the crio container factory successfully
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.287296 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.291113 4915 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.291189 4915 factory.go:103] Registering Raw factory
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.291230 4915 manager.go:1196] Started watching for new ooms in manager
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.291326 4915 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.293204 4915 manager.go:319] Starting recovery of all containers
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.294661 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.295389 4915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eaaa059cb094f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:41:49.281151311 +0000 UTC m=+0.639005035,LastTimestamp:2026-01-27 18:41:49.281151311 +0000 UTC m=+0.639005035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307574 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307640 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307662 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307681 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307699 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307717 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307734 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307751 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307772 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307817 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307837 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307856 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307874 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307894 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307911 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307929 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307950 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307967 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.307985 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308031 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308050 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308067 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308085 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308104 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308122 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308163 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308184 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308203 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308221 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308237 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308255 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308273 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308317 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308335 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308354 4915 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308372 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308391 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308409 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308428 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308446 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308463 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308482 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308500 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308518 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308536 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308555 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308610 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308629 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308649 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308693 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308710 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308728 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308752 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" 
seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308773 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308818 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308840 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308859 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308877 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308894 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: 
I0127 18:41:49.308912 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308930 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308948 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308967 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.308986 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309006 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309024 4915 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309043 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309060 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309078 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309097 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309114 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309131 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309149 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309166 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309184 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309201 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309219 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309236 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309253 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309271 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309289 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309306 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309323 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309341 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309360 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309377 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309395 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309411 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309429 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309446 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309464 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309483 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309500 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309518 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309536 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309553 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309572 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309591 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.309609 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311359 4915 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311398 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311420 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311440 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311458 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311478 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311505 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311526 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311555 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311574 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311594 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311611 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311631 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311651 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311671 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311690 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311710 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311729 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311746 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311763 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311782 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311826 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311844 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311862 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311880 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311912 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311930 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311947 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311964 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.311982 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312034 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312052 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312070 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312088 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312105 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312121 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312139 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312155 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312174 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312194 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312212 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312230 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312248 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312267 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312284 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312308 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312328 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312345 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312363 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312381 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312400 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312418 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312435 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312452 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312471 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312489 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312506 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312522 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312601 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312623 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312642 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312661 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312681 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312700 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312718 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312735 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312754 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312771 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312869 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312888 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312907 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312924 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312942 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312959 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312977 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.312994 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313012 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313033 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313050 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313069 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313086 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313104 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313122 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313139 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313157 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313175 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313193 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313216 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313233 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313250 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313267 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313284 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313300 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313320 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313338 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313356 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313374 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313390 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313409 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313426 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313443 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313462 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313479 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313498 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313515 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313535 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313552 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313579 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313597 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313615 4915 reconstruct.go:97] "Volume reconstruction finished"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.313628 4915 reconciler.go:26] "Reconciler: start to sync state"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.333520 4915 manager.go:324] Recovery completed
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.348705 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.350375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.350412 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.350423 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.351138 4915 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.351157 4915 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.351174 4915 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.354131 4915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.356275 4915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.356316 4915 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.356339 4915 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.356382 4915 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.357240 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.357394 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.374930 4915 policy_none.go:49] "None policy: Start"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.375650 4915 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.375671 4915 state_mem.go:35] "Initializing new in-memory state store"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.386760 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.428722 4915 manager.go:334] "Starting Device Plugin manager"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.428910 4915 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.428929 4915 server.go:79] "Starting device plugin registration server"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.429468 4915 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.429506 4915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.429785 4915 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.429922 4915 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.429937 4915 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.444620 4915 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.456875 4915 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.457028 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458767 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458938 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.458987 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459603 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459821 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459854 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.459886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460126 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460163 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460527 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460545 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460691 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.460964 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461016 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461512 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461643 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461822 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461852 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.461887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 
crc kubenswrapper[4915]: I0127 18:41:49.462824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.462990 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.463011 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.463776 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.463822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.463836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.495658 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515300 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515360 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515394 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515601 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515636 
4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515664 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515728 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515745 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515848 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515909 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515946 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.515976 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.530521 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.532615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.532663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.532680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.532712 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.533358 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617470 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617591 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617651 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617690 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617832 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617901 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617842 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617940 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617967 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.617992 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618005 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618065 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618078 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618065 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618140 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618115 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618041 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618213 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618226 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618289 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc 
kubenswrapper[4915]: I0127 18:41:49.618325 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618406 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.618546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.748307 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.750049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.750091 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.750103 4915 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.750130 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.750707 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.792814 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.811341 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.820577 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.841657 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: I0127 18:41:49.847812 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.851397 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f667c48b4826ce8445a06b00e2879364c63bab1e23773e819027b2ac721b5f17 WatchSource:0}: Error finding container f667c48b4826ce8445a06b00e2879364c63bab1e23773e819027b2ac721b5f17: Status 404 returned error can't find the container with id f667c48b4826ce8445a06b00e2879364c63bab1e23773e819027b2ac721b5f17 Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.854045 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-80c5fde2ae5666e1caf140520370f67339e475a450e8d231fc5f01886fd71acf WatchSource:0}: Error finding container 80c5fde2ae5666e1caf140520370f67339e475a450e8d231fc5f01886fd71acf: Status 404 returned error can't find the container with id 80c5fde2ae5666e1caf140520370f67339e475a450e8d231fc5f01886fd71acf Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.864199 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a49a2ecead0ebc234def2242790badb6ceec8d090ac6ea07e4c7011052d2dede WatchSource:0}: Error finding container a49a2ecead0ebc234def2242790badb6ceec8d090ac6ea07e4c7011052d2dede: Status 404 returned error can't find the container with id a49a2ecead0ebc234def2242790badb6ceec8d090ac6ea07e4c7011052d2dede Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.874936 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-16e3141afe5e72b3b0326994e548314218d632553d10a8ccb194411d6ad27f37 
WatchSource:0}: Error finding container 16e3141afe5e72b3b0326994e548314218d632553d10a8ccb194411d6ad27f37: Status 404 returned error can't find the container with id 16e3141afe5e72b3b0326994e548314218d632553d10a8ccb194411d6ad27f37 Jan 27 18:41:49 crc kubenswrapper[4915]: W0127 18:41:49.878506 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8bd84ebcd5a3a22ba5f2c723245912c813652613cc1177f84c2d312ce839a33c WatchSource:0}: Error finding container 8bd84ebcd5a3a22ba5f2c723245912c813652613cc1177f84c2d312ce839a33c: Status 404 returned error can't find the container with id 8bd84ebcd5a3a22ba5f2c723245912c813652613cc1177f84c2d312ce839a33c Jan 27 18:41:49 crc kubenswrapper[4915]: E0127 18:41:49.896722 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.151183 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.153124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.153149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.153157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.153175 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.153369 4915 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.284546 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.286604 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:02:28.548206383 +0000 UTC Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.360229 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16e3141afe5e72b3b0326994e548314218d632553d10a8ccb194411d6ad27f37"} Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.361363 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a49a2ecead0ebc234def2242790badb6ceec8d090ac6ea07e4c7011052d2dede"} Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.362282 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f667c48b4826ce8445a06b00e2879364c63bab1e23773e819027b2ac721b5f17"} Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.363255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"80c5fde2ae5666e1caf140520370f67339e475a450e8d231fc5f01886fd71acf"} Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.364283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8bd84ebcd5a3a22ba5f2c723245912c813652613cc1177f84c2d312ce839a33c"} Jan 27 18:41:50 crc kubenswrapper[4915]: W0127 18:41:50.455201 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.455274 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:41:50 crc kubenswrapper[4915]: W0127 18:41:50.636023 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.636152 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:41:50 crc kubenswrapper[4915]: W0127 
18:41:50.666397 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.666498 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.698044 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Jan 27 18:41:50 crc kubenswrapper[4915]: W0127 18:41:50.792631 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.792752 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.953465 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.955282 4915 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.955347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.955365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:50 crc kubenswrapper[4915]: I0127 18:41:50.955399 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:50 crc kubenswrapper[4915]: E0127 18:41:50.956476 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.263885 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:41:51 crc kubenswrapper[4915]: E0127 18:41:51.266540 4915 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.285007 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.287554 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:14:53.743977178 +0000 UTC Jan 27 
18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.372117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.372181 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.372201 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.372218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.372333 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.373584 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.373619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.373636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.375690 4915 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612" exitCode=0 Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.375820 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.375931 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.377397 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.377451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.377476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.378863 4915 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="80c5ee84a20acb7640d051fe1d0898ccf22f6d70d46822f7b06d19a910e164b3" exitCode=0 Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.378961 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.378978 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"80c5ee84a20acb7640d051fe1d0898ccf22f6d70d46822f7b06d19a910e164b3"} 
Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.381275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.381337 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.381360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.384386 4915 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0" exitCode=0 Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.384475 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.384532 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.388511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.388645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.388690 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.389948 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26" exitCode=0 Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.390254 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.390270 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26"} Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.394533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.394551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.394559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.400711 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.402100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.402155 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:51 crc kubenswrapper[4915]: I0127 18:41:51.402172 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.283999 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.288311 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:58:37.177848747 +0000 UTC Jan 27 18:41:52 crc kubenswrapper[4915]: E0127 18:41:52.298688 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.395006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.395057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.395070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.395084 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9"} Jan 27 
18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.396785 4915 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3" exitCode=0 Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.396858 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.396946 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.397822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.397847 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.397856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.399750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"32a3d7950da3a646b04f04374be21ff070c64d28974d5b36911229c111970f33"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.399924 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.400677 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.400736 4915 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.400752 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.402190 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.402543 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.402854 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.402993 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136"} Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403300 4915 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.403955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.557334 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.558431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.558479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.558495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:52 crc kubenswrapper[4915]: I0127 18:41:52.558523 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:52 crc kubenswrapper[4915]: E0127 18:41:52.559023 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.288985 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:38:00.279828306 +0000 UTC Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.409146 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518"} Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.409303 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.411121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.411171 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.411188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.413256 4915 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903" exitCode=0 Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.413314 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903"} Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.413377 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.413418 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.413377 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 
18:41:53.413551 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414870 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414882 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.414937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.495353 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.495627 4915 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.497358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.497420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:53 crc kubenswrapper[4915]: I0127 18:41:53.497445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.289449 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:08:44.862613342 +0000 UTC Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.300943 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419493 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159"} Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419537 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1"} Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7"} Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419574 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05"} Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419584 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.419701 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.420531 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.420558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.420570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.421034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.421080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:54 crc kubenswrapper[4915]: I0127 18:41:54.421096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.290349 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:52:21.797857561 +0000 UTC Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.428717 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85"} Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.428770 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.428874 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430572 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.430636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.612258 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.759482 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.761298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.761354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.761372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:55 crc kubenswrapper[4915]: I0127 18:41:55.761407 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.291863 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:51:14.383639605 +0000 UTC Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.431551 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.433138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.433200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.433224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.496219 4915 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.496368 4915 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.648286 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.801139 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.801408 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.802948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.802999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:56 crc kubenswrapper[4915]: I0127 18:41:56.803016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.226523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.293061 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:53:41.975236515 +0000 UTC Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.434723 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.434742 4915 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436902 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436919 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.436943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.483064 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.483251 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.484764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.484850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:57 crc kubenswrapper[4915]: I0127 18:41:57.484870 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.293482 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:49:45.338154639 +0000 UTC Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.660330 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.660554 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.662178 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.662221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:58 crc kubenswrapper[4915]: I0127 18:41:58.662233 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.294387 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:57:01.16258852 +0000 UTC Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.385225 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.392104 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.440661 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:41:59 crc 
kubenswrapper[4915]: I0127 18:41:59.442107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.442166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:41:59 crc kubenswrapper[4915]: I0127 18:41:59.442184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:41:59 crc kubenswrapper[4915]: E0127 18:41:59.444724 4915 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.294561 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:26:32.798758973 +0000 UTC Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.443151 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.444462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.444496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.444508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.447783 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.892311 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 
18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.892651 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.894390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.894465 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:00 crc kubenswrapper[4915]: I0127 18:42:00.894484 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:01 crc kubenswrapper[4915]: I0127 18:42:01.295704 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:38:22.812766356 +0000 UTC Jan 27 18:42:01 crc kubenswrapper[4915]: I0127 18:42:01.445110 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:01 crc kubenswrapper[4915]: I0127 18:42:01.449466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:01 crc kubenswrapper[4915]: I0127 18:42:01.449526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:01 crc kubenswrapper[4915]: I0127 18:42:01.449545 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:02 crc kubenswrapper[4915]: I0127 18:42:02.296467 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:03:07.719730231 +0000 UTC Jan 27 18:42:02 crc kubenswrapper[4915]: I0127 18:42:02.528744 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:42:02 crc kubenswrapper[4915]: I0127 18:42:02.528881 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:42:02 crc kubenswrapper[4915]: W0127 18:42:02.929270 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 18:42:02 crc kubenswrapper[4915]: I0127 18:42:02.929407 4915 trace.go:236] Trace[1089793808]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:41:52.927) (total time: 10002ms): Jan 27 18:42:02 crc kubenswrapper[4915]: Trace[1089793808]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:02.929) Jan 27 18:42:02 crc kubenswrapper[4915]: Trace[1089793808]: [10.002093258s] [10.002093258s] END Jan 27 18:42:02 crc kubenswrapper[4915]: E0127 18:42:02.929445 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 18:42:03 crc kubenswrapper[4915]: W0127 18:42:03.280356 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.280444 4915 trace.go:236] Trace[1469651246]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:41:53.278) (total time: 10001ms): Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[1469651246]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:03.280) Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[1469651246]: [10.001557205s] [10.001557205s] END Jan 27 18:42:03 crc kubenswrapper[4915]: E0127 18:42:03.280464 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.285119 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.297623 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:43:08.710458466 +0000 UTC Jan 27 18:42:03 crc kubenswrapper[4915]: W0127 18:42:03.436152 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 18:42:03 crc 
kubenswrapper[4915]: I0127 18:42:03.436247 4915 trace.go:236] Trace[779825107]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:41:53.434) (total time: 10001ms): Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[779825107]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:03.436) Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[779825107]: [10.001671011s] [10.001671011s] END Jan 27 18:42:03 crc kubenswrapper[4915]: E0127 18:42:03.436273 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 18:42:03 crc kubenswrapper[4915]: W0127 18:42:03.791836 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.791996 4915 trace.go:236] Trace[99693970]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 18:41:53.790) (total time: 10001ms): Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[99693970]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:42:03.791) Jan 27 18:42:03 crc kubenswrapper[4915]: Trace[99693970]: [10.001840757s] [10.001840757s] END Jan 27 18:42:03 crc kubenswrapper[4915]: E0127 18:42:03.792040 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.967843 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.968176 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.975201 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 18:42:03 crc kubenswrapper[4915]: I0127 18:42:03.975288 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 18:42:04 crc kubenswrapper[4915]: I0127 18:42:04.299230 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:59:56.192214676 +0000 UTC Jan 27 18:42:05 crc 
kubenswrapper[4915]: I0127 18:42:05.300329 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:04:48.960411255 +0000 UTC Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.301200 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:46:00.087740363 +0000 UTC Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.497311 4915 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.497439 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.807888 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.808137 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.809869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.809931 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.809950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.815882 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.946900 4915 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:06 crc kubenswrapper[4915]: I0127 18:42:06.992106 4915 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.301554 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:35:24.903562197 +0000 UTC Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.461191 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.461260 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.462680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.462746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:07 crc kubenswrapper[4915]: I0127 18:42:07.462827 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:08 crc kubenswrapper[4915]: I0127 18:42:08.302382 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-22 09:32:19.556351141 +0000 UTC Jan 27 18:42:08 crc kubenswrapper[4915]: E0127 18:42:08.967354 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 18:42:08 crc kubenswrapper[4915]: I0127 18:42:08.976123 4915 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.001635 4915 csr.go:261] certificate signing request csr-rnjwf is approved, waiting to be issued Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.303465 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:12:02.636664569 +0000 UTC Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.331162 4915 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.331286 4915 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.331626 4915 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.331680 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.332255 4915 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.337147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.337191 4915 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.337203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.337219 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.337230 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.341808 4915 csr.go:257] certificate signing request csr-rnjwf is issued Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.362998 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.367778 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.367850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.367862 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.367886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.367898 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.392885 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.403237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.403588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.403717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.403856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.403977 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.432277 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.433035 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46152->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.433089 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46152->192.168.126.11:17697: read: connection reset by peer" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.433476 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.433578 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.438148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc 
kubenswrapper[4915]: I0127 18:42:09.438296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.438431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.438497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.438591 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.444929 4915 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.455275 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.456122 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.456245 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:42:09 crc kubenswrapper[4915]: E0127 18:42:09.556490 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.556632 4915 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.658331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.658355 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.658363 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.658376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.658385 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.760978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.761022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.761037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.761051 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.761062 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.863560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.863589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.863599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.863611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.863620 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.966479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.966570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.966587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.966616 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:09 crc kubenswrapper[4915]: I0127 18:42:09.966635 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:09Z","lastTransitionTime":"2026-01-27T18:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.069253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.069308 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.069320 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.069338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.069351 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.173058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.173132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.173160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.173191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.173213 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275701 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275753 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.275963 4915 apiserver.go:52] "Watching apiserver" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.303845 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:07:02.518723291 +0000 UTC Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.343234 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 18:37:09 +0000 UTC, rotation deadline is 2026-12-16 19:49:49.827013181 +0000 UTC Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.343317 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7753h7m39.483698834s for next certificate rotation Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.380565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.380591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.380600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.380614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.380623 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.469351 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.471307 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518" exitCode=255 Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.471356 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.482408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.482442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.482450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.482462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.482470 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.491626 4915 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.491923 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-78l9d","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492254 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492406 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492429 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.492469 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492534 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492688 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492696 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.492725 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.492768 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.492977 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.494395 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.494896 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.494968 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.495020 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.495249 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.495436 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.495754 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.495769 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.496944 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.497027 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 
18:42:10.498366 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.498678 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.514679 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.522282 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.534273 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.542754 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.559460 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.573754 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.582477 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.585024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.585065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.585075 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.585118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.585133 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.587938 4915 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.591848 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640287 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640328 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640347 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640367 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640400 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640415 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640434 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640453 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640468 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640484 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640515 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640534 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640553 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640567 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640583 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640636 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640668 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640683 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640714 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640729 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640731 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640765 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640813 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640840 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640879 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640898 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640915 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640988 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641004 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641019 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641050 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641113 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641129 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641170 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641186 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641221 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641257 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641273 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641393 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641409 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 
18:42:10.641425 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641441 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641456 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641470 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641548 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642235 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642269 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642292 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642339 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642364 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642405 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642430 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: 
I0127 18:42:10.640747 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645602 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640732 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640787 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.640986 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641081 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641133 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641223 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641239 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641304 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641345 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641474 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.641487 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642332 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642550 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642625 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642761 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642840 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.642853 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:11.14282851 +0000 UTC m=+22.500682174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643161 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643554 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643589 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643679 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643921 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643989 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643997 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644207 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644442 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644462 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644571 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644606 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644701 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644721 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644760 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644806 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.644871 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645031 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645268 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645334 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645343 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645345 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646359 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645567 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645590 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646392 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.645694 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646157 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.642471 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646473 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646520 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646554 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646585 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646615 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.643021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646697 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646742 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646883 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646947 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.646994 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647076 4915 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647112 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647212 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647588 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647858 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647886 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647932 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647984 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648072 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647170 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648537 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648670 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.648892 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.649503 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.649742 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.647676 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.649864 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650120 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650335 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650370 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650559 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650587 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650642 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650672 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650695 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650723 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" 
(UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650751 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650779 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.650828 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.651168 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.651419 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.653618 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.653724 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.653827 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.653851 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.654107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.654113 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.654684 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.653175 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655158 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655247 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655330 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655405 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655475 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655547 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655823 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656319 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656578 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:10 crc 
kubenswrapper[4915]: I0127 18:42:10.656682 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657005 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.654803 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.654893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.655977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656600 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656608 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.656981 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657425 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657524 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657676 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657768 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657881 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.657980 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658080 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658189 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658319 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658418 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658525 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658752 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658869 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.658964 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659050 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659139 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659236 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659375 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659552 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659647 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659832 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.659991 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660106 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660261 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660408 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660520 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660627 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660725 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660851 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660952 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 
27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661044 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661137 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661226 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661308 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661390 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661486 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661695 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661816 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661918 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662008 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662104 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662198 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.660547 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662543 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662641 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.662741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.663336 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664010 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664326 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664361 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664389 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664420 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" 
(UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664449 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.665649 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.665707 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666147 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666175 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666203 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666229 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666253 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666278 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666304 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666351 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" 
(UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666381 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666407 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666442 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666471 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666522 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666550 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666580 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666685 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666715 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666773 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666820 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666850 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666876 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666905 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666933 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666961 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666987 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667015 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667047 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667102 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667131 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667159 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667186 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667214 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667266    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667293    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667320    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667344    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667376    4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667408    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667441    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667469    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667495    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667520    4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667555    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667585    4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/81cd7e15-a585-4cae-b306-701292248ea6-hosts-file\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667612    4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjrg\" (UniqueName: \"kubernetes.io/projected/81cd7e15-a585-4cae-b306-701292248ea6-kube-api-access-5vjrg\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667643    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667674    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667706    4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667822    4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667842    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667859    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667874    4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667888    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667904    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667919    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667936    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667949    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667965    4915 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667979    4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667993    4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668007    4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668020    4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668035    4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668049    4915 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668064    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668077    4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668089    4915 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668102    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668115    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668130    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668143    4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668157    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668171    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668187    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668201    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668216    4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668230    4915 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668244    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668257    4915 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668272    4915 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668285    4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668299    4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668312    4915 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668325    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668339    4915 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668352    4915 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668366    4915 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668380    4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668393    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668405    4915 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668417    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668430    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668445    4915 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668459    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668472    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668485    4915 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668499    4915 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668514    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668530    4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668543    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668556    4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668570    4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668584    4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668597    4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668611    4915 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668625    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668637    4915 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668652    4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668666    4915 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668679    4915 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668692    4915 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668705    4915 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668718    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668731    4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668744    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668758    4915 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668772    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668786    4915 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668818    4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668831    4915 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668844    4915 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668857    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668870    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668884    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668897    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668911    4915 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668927    4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668941    4915 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668954    4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668970    4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668983    4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668997    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669010    4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669023    4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669038    4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669051    4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669065    4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.661050    4915 scope.go:117] "RemoveContainer" containerID="067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518"
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.663227    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.663522    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664495    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666078    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666226    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.666270    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.664906    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.665078    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.665344    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667367    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667388    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667390    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667551    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667677    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667886    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.667896    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668010    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670742    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670520    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670984    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.671009    4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.671188 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.671323 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668325 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668333 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668586 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.668638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669040 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669052 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669405 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669600 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669915 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.669942 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670079 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.670228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.671683 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.672149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.672423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.674236 4915 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.673768 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.665366 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.675506 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.684508 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.684594 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:11.184575212 +0000 UTC m=+22.542428966 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.687626 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.687674 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.687694 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.687779 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:11.187750933 +0000 UTC m=+22.545604637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.689132 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.689188 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.689849 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.690066 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.690880 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.690905 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.690919 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.690965 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:11.190949084 +0000 UTC m=+22.548802768 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.691058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.691079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.691090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.691124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.691137 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.691243 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: E0127 18:42:10.691379 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:11.191367325 +0000 UTC m=+22.549220999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.694888 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.698706 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.698977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699108 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699379 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699686 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699707 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699714 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699939 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.699952 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.700051 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.700100 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.700197 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.700781 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.701308 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.703130 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.704042 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.704088 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.704146 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.704279 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.704853 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.705582 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.706065 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.706227 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.706313 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.706903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.706957 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.707083 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.707351 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.707504 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.707552 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.707873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.708647 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.710457 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.715161 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.715352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.715369 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.715552 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.715846 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.716087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.716725 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.716245 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.716539 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.717207 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.717382 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.717587 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.717583 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.717893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.718166 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.718937 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.719213 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.719363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.719605 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.720218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.720590 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.721071 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.722006 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.727343 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770607 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770683 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770718 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/81cd7e15-a585-4cae-b306-701292248ea6-hosts-file\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770746 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjrg\" (UniqueName: \"kubernetes.io/projected/81cd7e15-a585-4cae-b306-701292248ea6-kube-api-access-5vjrg\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770848 4915 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770864 4915 reconciler_common.go:293] "Volume 
detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770876 4915 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770889 4915 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770901 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770911 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770922 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770938 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770949 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770960 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.770970 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771017 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771032 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771045 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771087 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771097 4915 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771108 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771160 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771260 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/81cd7e15-a585-4cae-b306-701292248ea6-hosts-file\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771154 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771381 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771393 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771405 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771428 4915 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771439 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771452 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771463 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771474 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771484 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771495 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771506 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771517 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771528 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771536 4915 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771544 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 
18:42:10.771552 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771560 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771569 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771581 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771590 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771598 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771606 4915 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771616 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771624 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771633 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771641 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771650 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771659 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771668 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771676 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771684 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771692 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771700 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771708 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771717 4915 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771725 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771733 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771742 
4915 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771749 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771757 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771765 4915 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.771995 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772008 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772016 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772026 4915 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772034 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772042 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772050 4915 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772058 4915 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772066 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772074 4915 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772082 4915 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 
18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772090 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772099 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772107 4915 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772115 4915 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772124 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772132 4915 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772139 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 
18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772148 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772155 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772165 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772173 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772180 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772188 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772196 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772204 4915 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772213 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772222 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772230 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772238 4915 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772246 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772254 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772262 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772270 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772279 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772287 4915 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772296 4915 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772308 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772317 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772326 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772335 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.772343 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.791608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjrg\" (UniqueName: \"kubernetes.io/projected/81cd7e15-a585-4cae-b306-701292248ea6-kube-api-access-5vjrg\") pod \"node-resolver-78l9d\" (UID: \"81cd7e15-a585-4cae-b306-701292248ea6\") " pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.794823 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5bpjb"] Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795164 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795176 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795246 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795284 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.795299 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.797163 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.797406 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.797564 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.797820 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.800200 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.803624 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.813522 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.813587 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: W0127 18:42:10.815684 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5112eb85b43900f2767dc44fbe97a27cf413c01b04cc8635705bada5587c42ef WatchSource:0}: Error finding container 5112eb85b43900f2767dc44fbe97a27cf413c01b04cc8635705bada5587c42ef: Status 404 returned error can't find the container with id 5112eb85b43900f2767dc44fbe97a27cf413c01b04cc8635705bada5587c42ef Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.817629 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-78l9d" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.822411 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.823135 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.832330 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: W0127 18:42:10.833469 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81cd7e15_a585_4cae_b306_701292248ea6.slice/crio-3ce396a130518d0c8c3978d8ddf447bb3920a1f56349718addb92e7cc7d8b808 WatchSource:0}: Error finding container 3ce396a130518d0c8c3978d8ddf447bb3920a1f56349718addb92e7cc7d8b808: Status 404 returned error can't find the container with id 
3ce396a130518d0c8c3978d8ddf447bb3920a1f56349718addb92e7cc7d8b808 Jan 27 18:42:10 crc kubenswrapper[4915]: W0127 18:42:10.839301 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d9d7329dc6c84bb82b2c60757a41d90de6eb4e4a75f86a9ec5f9659c40827c8c WatchSource:0}: Error finding container d9d7329dc6c84bb82b2c60757a41d90de6eb4e4a75f86a9ec5f9659c40827c8c: Status 404 returned error can't find the container with id d9d7329dc6c84bb82b2c60757a41d90de6eb4e4a75f86a9ec5f9659c40827c8c Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.843399 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.854452 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.862387 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.873232 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.881247 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.896873 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.906746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.906831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.906845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.906864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.906875 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:10Z","lastTransitionTime":"2026-01-27T18:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.911375 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.951838 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.963060 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.968803 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973610 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-kubelet\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-etc-kubernetes\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973670 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf5d8\" (UniqueName: \"kubernetes.io/projected/fe27a668-1ea7-44c8-9490-55cf8db5dad9-kube-api-access-lf5d8\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973780 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-system-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973880 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-daemon-config\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973932 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cnibin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973955 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-netns\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.973987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-socket-dir-parent\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974015 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-multus\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974047 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cni-binary-copy\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974066 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-bin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974094 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-hostroot\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974118 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-os-release\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974137 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-k8s-cni-cncf-io\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974156 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-conf-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974305 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-multus-certs\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.974431 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.986236 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.991722 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.992159 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:42:10 crc kubenswrapper[4915]: I0127 18:42:10.999127 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.005160 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.011315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.011364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.011377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.011396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.011409 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.020326 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.031105 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.042592 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.052877 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.062262 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.064542 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074260 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074762 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-multus-certs\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074835 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-kubelet\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074865 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-etc-kubernetes\") pod 
\"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074895 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf5d8\" (UniqueName: \"kubernetes.io/projected/fe27a668-1ea7-44c8-9490-55cf8db5dad9-kube-api-access-lf5d8\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074925 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-system-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074967 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-daemon-config\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.074998 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075027 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cnibin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075057 
4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-netns\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075100 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-socket-dir-parent\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075137 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-multus\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075167 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-bin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075263 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-netns\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075259 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-etc-kubernetes\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075307 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-kubelet\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-multus-certs\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-multus\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cnibin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075435 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-system-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 
27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-socket-dir-parent\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-cni-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075508 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-var-lib-cni-bin\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-hostroot\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-hostroot\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075613 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cni-binary-copy\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075644 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-k8s-cni-cncf-io\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075699 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-host-run-k8s-cni-cncf-io\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-conf-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075843 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-conf-dir\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.075966 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-os-release\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " 
pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076027 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076042 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076072 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076136 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe27a668-1ea7-44c8-9490-55cf8db5dad9-os-release\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-cni-binary-copy\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.076272 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe27a668-1ea7-44c8-9490-55cf8db5dad9-multus-daemon-config\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.083406 4915 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.090757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf5d8\" (UniqueName: \"kubernetes.io/projected/fe27a668-1ea7-44c8-9490-55cf8db5dad9-kube-api-access-lf5d8\") pod \"multus-5bpjb\" (UID: \"fe27a668-1ea7-44c8-9490-55cf8db5dad9\") " pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.093664 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.102099 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.110108 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5bpjb" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.110590 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.114347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.114376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.114387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.114402 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.114414 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.122753 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.133555 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.141834 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.151294 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q8dsj"] Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.151699 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.153981 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.154618 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8spt"] Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.155286 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.155939 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.156609 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.159387 4915 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.159463 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.159578 4915 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.159611 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.159708 4915 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.163763 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.164184 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-fxrlf"] Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.166244 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.166414 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.166618 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.166675 4915 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.170447 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.173162 4915 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.173210 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.173327 4915 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.173383 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.173209 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.173819 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.176486 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.177321 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.177537 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:12.177515978 +0000 UTC m=+23.535369662 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.217004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.217046 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.217054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.217068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.217078 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.218774 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.234288 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.243414 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.252393 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.261824 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.271157 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278504 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-system-cni-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278548 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e61db92-39b6-4acf-89af-34169c61e709-proxy-tls\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278628 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-cnibin\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278648 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278668 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278711 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278731 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.278759 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.278777 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.278867 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:12.278850687 +0000 UTC m=+23.636704431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279033 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279127 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279157 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e61db92-39b6-4acf-89af-34169c61e709-rootfs\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279177 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7e61db92-39b6-4acf-89af-34169c61e709-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279252 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsc5\" (UniqueName: \"kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279292 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279336 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279361 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279450 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279482 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279505 
4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279572 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279633 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.279739 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.279766 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.279780 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.279844 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:12.279826002 +0000 UTC m=+23.637679666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279886 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279908 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nk6\" (UniqueName: \"kubernetes.io/projected/1c37793b-7e30-4f54-baab-48a358a948b2-kube-api-access-p8nk6\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279966 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.279992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mplxv\" (UniqueName: \"kubernetes.io/projected/7e61db92-39b6-4acf-89af-34169c61e709-kube-api-access-mplxv\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.280021 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.280043 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.280064 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-os-release\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280169 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280189 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280199 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280226 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:12.280217972 +0000 UTC m=+23.638071636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280275 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.280304 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:12.280295674 +0000 UTC m=+23.638149428 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.281041 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.290584 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.297201 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.304910 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:11:55.125088215 +0000 UTC Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.313902 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.318845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.318875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.318884 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.318896 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.318906 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.337782 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.352241 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.364717 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.379856 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380671 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nk6\" (UniqueName: \"kubernetes.io/projected/1c37793b-7e30-4f54-baab-48a358a948b2-kube-api-access-p8nk6\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380785 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380863 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380887 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mplxv\" (UniqueName: \"kubernetes.io/projected/7e61db92-39b6-4acf-89af-34169c61e709-kube-api-access-mplxv\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380931 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.380961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-os-release\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381007 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-system-cni-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381064 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e61db92-39b6-4acf-89af-34169c61e709-proxy-tls\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-os-release\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381122 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381153 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-system-cni-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381173 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-cnibin\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381196 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381241 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381292 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381299 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-cnibin\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381341 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381376 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381390 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381184 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381436 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381522 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381569 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381615 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381610 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units\") pod \"ovnkube-node-n8spt\" 
(UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381591 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381766 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e61db92-39b6-4acf-89af-34169c61e709-rootfs\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.381966 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib\") pod \"ovnkube-node-n8spt\" (UID: 
\"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e61db92-39b6-4acf-89af-34169c61e709-rootfs\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382094 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382108 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwsc5\" (UniqueName: \"kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382171 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382224 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e61db92-39b6-4acf-89af-34169c61e709-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8dsj\" (UID: 
\"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382324 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382420 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382494 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382495 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382535 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " 
pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382567 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382573 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382602 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382632 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382678 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382607 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.382702 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.401964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.422068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.422149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.422174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.422208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.422232 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: E0127 18:42:11.491409 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.526880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.526936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.526947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.526964 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.526975 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.598706 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1c37793b-7e30-4f54-baab-48a358a948b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.599249 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.600185 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.602249 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.603120 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.604519 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.605207 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.605997 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.607458 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.608363 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.609988 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.611175 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.613023 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.613747 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.614451 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.615691 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.616520 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.618009 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.618554 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.619342 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.626834 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.627442 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.628014 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.629038 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.629848 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.630762 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.631471 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632584 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.632614 4915 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.633107 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.634103 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.634604 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.635098 4915 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.635540 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.637223 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.637727 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.638737 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.640537 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.641201 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.642171 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.642967 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.643964 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.644689 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.645649 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.646307 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.647314 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.647846 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.648717 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.649201 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.650280 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.650729 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 
18:42:11.651564 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.652067 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.652911 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.653453 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.653959 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.654745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-78l9d" event={"ID":"81cd7e15-a585-4cae-b306-701292248ea6","Type":"ContainerStarted","Data":"3ce396a130518d0c8c3978d8ddf447bb3920a1f56349718addb92e7cc7d8b808"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.654770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4bc1f467df092080a04db2c66d20b099e3a5909bdac2e2b96382ab11f8fa822b"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.654782 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d9d7329dc6c84bb82b2c60757a41d90de6eb4e4a75f86a9ec5f9659c40827c8c"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.654812 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5112eb85b43900f2767dc44fbe97a27cf413c01b04cc8635705bada5587c42ef"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.692244 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e61db92-39b6-4acf-89af-34169c61e709-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.696656 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.702492 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nk6\" (UniqueName: \"kubernetes.io/projected/1c37793b-7e30-4f54-baab-48a358a948b2-kube-api-access-p8nk6\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.704728 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.705015 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1c37793b-7e30-4f54-baab-48a358a948b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fxrlf\" (UID: \"1c37793b-7e30-4f54-baab-48a358a948b2\") " pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.709345 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mplxv\" (UniqueName: \"kubernetes.io/projected/7e61db92-39b6-4acf-89af-34169c61e709-kube-api-access-mplxv\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.709627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e61db92-39b6-4acf-89af-34169c61e709-proxy-tls\") pod \"machine-config-daemon-q8dsj\" (UID: \"7e61db92-39b6-4acf-89af-34169c61e709\") " pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.735025 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.735053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.735061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc 
kubenswrapper[4915]: I0127 18:42:11.735073 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.735082 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.838069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.838128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.838146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.838169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.838188 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.899227 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.899516 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.926854 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c37793b_7e30_4f54_baab_48a358a948b2.slice/crio-e2b8c930710b39f54bbc85754b6d7d9b1e8551e93bc6644d0c052b6335a96064 WatchSource:0}: Error finding container e2b8c930710b39f54bbc85754b6d7d9b1e8551e93bc6644d0c052b6335a96064: Status 404 returned error can't find the container with id e2b8c930710b39f54bbc85754b6d7d9b1e8551e93bc6644d0c052b6335a96064 Jan 27 18:42:11 crc kubenswrapper[4915]: W0127 18:42:11.927987 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e61db92_39b6_4acf_89af_34169c61e709.slice/crio-05bf37e87d0f9b244d459d270dd9fa49f344eae11f53acad9c10e01ae863cc7d WatchSource:0}: Error finding container 05bf37e87d0f9b244d459d270dd9fa49f344eae11f53acad9c10e01ae863cc7d: Status 404 returned error can't find the container with id 05bf37e87d0f9b244d459d270dd9fa49f344eae11f53acad9c10e01ae863cc7d Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.941570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.941605 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.941619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.941637 4915 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 27 18:42:11 crc kubenswrapper[4915]: I0127 18:42:11.941652 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:11Z","lastTransitionTime":"2026-01-27T18:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.045218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.045270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.045296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.045314 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.045326 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.089581 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.116743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwsc5\" (UniqueName: \"kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.120092 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.147045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.147328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.147338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.147351 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.147361 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.162211 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.193154 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.193587 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:14.193554107 +0000 UTC m=+25.551407771 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.223993 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.244308 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.250129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.250163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.250175 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.250194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.250207 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.256592 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.258810 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.264372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib\") pod \"ovnkube-node-n8spt\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.293676 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.293730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.293764 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.293831 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.293961 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.293993 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294023 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294027 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294074 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:14.294006503 +0000 UTC m=+25.651860177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294126 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:14.294104366 +0000 UTC m=+25.651958080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294040 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294217 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:14.294200578 +0000 UTC m=+25.652054242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294041 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294242 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294251 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.294272 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:14.29426613 +0000 UTC m=+25.652119794 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.305957 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:17:27.716107978 +0000 UTC Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.352454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.352484 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.352497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.352513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.352524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.357473 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.357553 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.357813 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.357862 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.357901 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:12 crc kubenswrapper[4915]: E0127 18:42:12.357942 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.455133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.455170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.455180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.455193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.455201 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.483341 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerStarted","Data":"82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.483387 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerStarted","Data":"e2b8c930710b39f54bbc85754b6d7d9b1e8551e93bc6644d0c052b6335a96064"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.485574 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.487308 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.487442 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.488705 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.490002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.490027 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"05bf37e87d0f9b244d459d270dd9fa49f344eae11f53acad9c10e01ae863cc7d"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.491386 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerStarted","Data":"f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.491433 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerStarted","Data":"ef6fee58f022218f7b676c0c7149fd1acfd68ebd587ddb1fd5111276e19beccb"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.493387 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-78l9d" event={"ID":"81cd7e15-a585-4cae-b306-701292248ea6","Type":"ContainerStarted","Data":"80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.498974 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.504163 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.507776 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.520233 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.533126 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.549638 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.560542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.560597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.560609 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.560627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.560645 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.576762 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.593134 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.605380 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.617825 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.627974 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.636211 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.642908 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.653326 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.661054 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.662815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.662878 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.662894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.662918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.662933 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.671170 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.681120 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.704845 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.717693 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.727202 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.737687 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.746068 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.773989 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.774024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.774033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.774049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.774062 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.791842 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.830504 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.839241 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.855310 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.869722 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174
de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.875929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.875971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.875979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.875997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.876007 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.883092 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.978938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.979209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.979329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.979440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.979535 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:12Z","lastTransitionTime":"2026-01-27T18:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.983546 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-msgjd"] Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.983868 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.987141 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.987181 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.987423 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.993443 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 18:42:12 crc kubenswrapper[4915]: I0127 18:42:12.997327 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.003930 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.013316 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.024206 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.033580 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.042224 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.055997 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.074133 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174
de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.082303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.082357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.082370 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.082389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.082403 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.088191 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.099510 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.100945 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5j5\" (UniqueName: \"kubernetes.io/projected/b776fd1b-6b54-4f8a-a42c-18e8103fded3-kube-api-access-gg5j5\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.101006 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b776fd1b-6b54-4f8a-a42c-18e8103fded3-serviceca\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.101041 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/b776fd1b-6b54-4f8a-a42c-18e8103fded3-host\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.108887 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.123324 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.134641 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.145036 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.185359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.185408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.185421 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.185439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.185460 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.202031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5j5\" (UniqueName: \"kubernetes.io/projected/b776fd1b-6b54-4f8a-a42c-18e8103fded3-kube-api-access-gg5j5\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.202094 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b776fd1b-6b54-4f8a-a42c-18e8103fded3-serviceca\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.202146 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b776fd1b-6b54-4f8a-a42c-18e8103fded3-host\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.202256 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b776fd1b-6b54-4f8a-a42c-18e8103fded3-host\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.204070 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b776fd1b-6b54-4f8a-a42c-18e8103fded3-serviceca\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.220996 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gg5j5\" (UniqueName: \"kubernetes.io/projected/b776fd1b-6b54-4f8a-a42c-18e8103fded3-kube-api-access-gg5j5\") pod \"node-ca-msgjd\" (UID: \"b776fd1b-6b54-4f8a-a42c-18e8103fded3\") " pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.289397 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.289462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.289480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.289506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.289524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.296928 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-msgjd" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.306694 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:03:42.188365752 +0000 UTC Jan 27 18:42:13 crc kubenswrapper[4915]: W0127 18:42:13.310651 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb776fd1b_6b54_4f8a_a42c_18e8103fded3.slice/crio-d22e6c887015a740b87266110afbd12f086653434860bb2b40e479281b21765b WatchSource:0}: Error finding container d22e6c887015a740b87266110afbd12f086653434860bb2b40e479281b21765b: Status 404 returned error can't find the container with id d22e6c887015a740b87266110afbd12f086653434860bb2b40e479281b21765b Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.392414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.392456 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.392470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.392489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.392502 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.495936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.496010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.496028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.496056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.496073 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.499671 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.503973 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.508611 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.511608 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.512026 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.515197 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-msgjd" 
event={"ID":"b776fd1b-6b54-4f8a-a42c-18e8103fded3","Type":"ContainerStarted","Data":"d22e6c887015a740b87266110afbd12f086653434860bb2b40e479281b21765b"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.518108 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.520223 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711" exitCode=0 Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.520315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.520545 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.524445 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" exitCode=0 Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.524481 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" 
event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.524529 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"fa9ba831fdb5173312e7761600cf5c9d6a31134af62c8716808cbd914bc5465b"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.534553 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.543875 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.554273 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.573773 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.586158 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601662 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601932 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601945 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.601953 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.612865 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.629229 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.644888 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.658613 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.680410 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.704772 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.704838 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.704852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 
18:42:13.704872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.704884 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.724167 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.766654 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.801761 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.807848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc 
kubenswrapper[4915]: I0127 18:42:13.807913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.807933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.807958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.807976 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.843100 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.885960 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.910713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.910776 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.910805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:13 crc 
kubenswrapper[4915]: I0127 18:42:13.910823 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.910836 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:13Z","lastTransitionTime":"2026-01-27T18:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.922322 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:13 crc kubenswrapper[4915]: I0127 18:42:13.964250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.001601 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:13Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.013081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.013126 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.013137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc 
kubenswrapper[4915]: I0127 18:42:14.013151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.013161 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.038898 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.086606 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.120563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.120900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.120914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.120930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.120942 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.122906 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.160414 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.200121 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.212060 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.212220 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:42:18.212200023 +0000 UTC m=+29.570053687 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.222766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.222805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.222818 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.222833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.222844 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.249228 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.285442 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.307826 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:12:25.795872236 +0000 UTC Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.313102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.313231 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313293 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313359 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313384 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313400 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313410 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:18.313392528 +0000 UTC m=+29.671246202 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.313334 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313449 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:18.313434509 +0000 UTC m=+29.671288183 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.313499 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313572 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313611 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:18.313598344 +0000 UTC m=+29.671452098 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313823 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313890 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.313942 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.314019 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:18.314009374 +0000 UTC m=+29.671863038 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.325268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.325309 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.325325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.325342 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.325358 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.326133 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.357470 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.357527 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.357605 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.357723 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.357992 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.358181 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.428367 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.428403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.428416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.428432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.428442 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.529645 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.530479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.530637 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.530722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.530829 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.530965 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.532640 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-msgjd" event={"ID":"b776fd1b-6b54-4f8a-a42c-18e8103fded3","Type":"ContainerStarted","Data":"4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.535678 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c" exitCode=0 Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.535769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.539755 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.539812 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.539827 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.539841 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} Jan 27 18:42:14 crc kubenswrapper[4915]: E0127 18:42:14.545493 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.557998 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.571250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.587702 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.607382 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.621377 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56
dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.632567 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.634576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.634697 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.634842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.634939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.635015 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.646631 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.662680 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.699865 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.748018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.748269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.748380 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.748470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.748544 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.772891 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.782044 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.824500 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.850827 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.850871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.850881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.850894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.850904 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.866175 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.902899 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.940499 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.953413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.953435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.953450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.953462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.953473 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:14Z","lastTransitionTime":"2026-01-27T18:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:14 crc kubenswrapper[4915]: I0127 18:42:14.985870 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bd
ca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:14Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.021829 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.055669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.055711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.055720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.055737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.055747 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.060369 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.105955 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.144930 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.158293 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.158332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.158345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc 
kubenswrapper[4915]: I0127 18:42:15.158369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.158381 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.185035 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.238990 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.261061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.261116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.261133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.261157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.261173 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.270161 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.306844 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.309908 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:09:18.280641968 +0000 UTC Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.343841 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.363648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.363700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.363718 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.363746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.363765 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.396179 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.430333 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.466957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.467040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.467063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.467093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.467118 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.470312 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.506988 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.546713 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4" exitCode=0 Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.546861 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.549070 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.551771 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" 
event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.551824 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.570364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.570819 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.570839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.570863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.570882 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.586579 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.628234 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.669329 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.674605 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.674668 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.674684 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.674707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.674722 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.703379 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.742445 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.777568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.777606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.777619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.777638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.777654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.789856 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.825476 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.867279 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.880962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.881023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.881033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.881049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.881059 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.914456 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.947171 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.983521 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:15Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.984661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.984681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.984690 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.984702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:15 crc kubenswrapper[4915]: I0127 18:42:15.984711 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:15Z","lastTransitionTime":"2026-01-27T18:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.021072 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.073774 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.087551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.087592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.087611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.087634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.087653 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.111021 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.152327 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87f
c1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.191700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.191784 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.191845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.191878 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.191902 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.295154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.295204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.295216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.295231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.295242 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.311016 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:15:55.061253022 +0000 UTC Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.357060 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.357216 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.357060 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:16 crc kubenswrapper[4915]: E0127 18:42:16.357295 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:16 crc kubenswrapper[4915]: E0127 18:42:16.357436 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:16 crc kubenswrapper[4915]: E0127 18:42:16.357547 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.399019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.399085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.399103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.399128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.399147 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.502644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.502688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.502856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.502882 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.502895 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.560310 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95" exitCode=0 Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.560372 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.588414 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.605683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.605739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.605755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 
18:42:16.605779 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.605821 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.610456 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.628265 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.646819 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.663498 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.679383 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.692634 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.707603 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.707914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.707970 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.707989 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.708013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.708032 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.721707 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.732069 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.755839 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.776194 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.793843 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.809853 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.810668 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.810704 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.810717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.810734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.810747 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.822616 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.913071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.913114 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.913127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.913144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:16 crc kubenswrapper[4915]: I0127 18:42:16.913156 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:16Z","lastTransitionTime":"2026-01-27T18:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.015213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.015260 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.015272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.015290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.015302 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.117891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.117935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.117947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.117981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.118005 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.221277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.221350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.221368 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.221767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.221860 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.311999 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:29:01.477633068 +0000 UTC Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.325422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.325462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.325476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.325499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.325514 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.428434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.428491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.428510 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.428532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.428549 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.531740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.531785 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.531820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.531839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.531850 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.569255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.573495 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa" exitCode=0 Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.573551 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.596683 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.616776 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.633453 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.635511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.635593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.635610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.635633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.635650 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.655992 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.673940 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.689509 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.705143 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.716875 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.733182 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18
:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.740870 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.740902 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.740911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.740923 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.740932 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.750472 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.762833 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.772137 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.795640 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.812473 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.828373 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.842927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.842955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.842965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.842979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.842988 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.945717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.945777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.945817 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.945842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:17 crc kubenswrapper[4915]: I0127 18:42:17.945861 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:17Z","lastTransitionTime":"2026-01-27T18:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.048978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.049018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.049032 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.049048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.049060 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.152897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.152960 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.152977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.153004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.153022 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.255261 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.255564 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:26.255526938 +0000 UTC m=+37.613380642 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.256372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.256436 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.256459 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.256487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.256509 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.312317 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:59:59.195040723 +0000 UTC Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.356964 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357042 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357042 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357161 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357205 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.357410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357559 4915 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357657 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:26.357603536 +0000 UTC m=+37.715457230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357718 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357762 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:26.3577479 +0000 UTC m=+37.715601594 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357771 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357855 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357886 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357908 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357930 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357940 4915 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357955 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357971 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.357983 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:26.357952725 +0000 UTC m=+37.715806429 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:18 crc kubenswrapper[4915]: E0127 18:42:18.358014 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:26.358000606 +0000 UTC m=+37.715854300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.359221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.359267 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.359284 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.359308 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.359325 4915 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.462412 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.462468 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.462485 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.462508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.462524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.566310 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.566381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.566404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.566434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.566462 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.583010 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c37793b-7e30-4f54-baab-48a358a948b2" containerID="0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250" exitCode=0 Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.583075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerDied","Data":"0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.602896 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.626766 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.640194 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.654240 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.667857 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.669887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.669934 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.669953 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.669977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.669995 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.683965 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.706677 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.727263 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.741279 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.754119 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.767470 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.772577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.772610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.772639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.772654 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.772663 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.782723 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.796860 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.807592 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.817458 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.875685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc 
kubenswrapper[4915]: I0127 18:42:18.875721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.875729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.875742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.875752 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.978062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.978107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.978119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.978138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:18 crc kubenswrapper[4915]: I0127 18:42:18.978151 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:18Z","lastTransitionTime":"2026-01-27T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.081130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.081175 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.081186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.081240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.081253 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.125932 4915 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.183926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.183986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.184006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.184031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.184050 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.287523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.287584 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.287601 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.287628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.287646 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.313178 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:11:28.760871384 +0000 UTC Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.382395 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.395428 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.395508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.395528 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.395567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.395593 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.406043 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f52456
2f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.436766 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.468099 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.485233 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.499775 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.499876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.499891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.499915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.499929 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.505574 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.528854 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.548348 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.569155 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.588964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.594877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" event={"ID":"1c37793b-7e30-4f54-baab-48a358a948b2","Type":"ContainerStarted","Data":"3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.601696 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602281 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602312 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602331 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.602372 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.612128 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.639194 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.640781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.640836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.640852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc 
kubenswrapper[4915]: I0127 18:42:19.640872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.640887 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.647569 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.655836 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.666641 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.679535 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687527 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687867 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687896 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.687908 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.718968 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.719047 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.724285 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.724338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.724356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.724379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.724397 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.732622 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.736672 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.740695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.740724 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.740734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.740748 
4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.740757 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.749215 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.752944 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.756721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.756748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.756757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.756768 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.756777 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.761131 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.768400 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: E0127 18:42:19.768508 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.769839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.769877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.769890 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.769907 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.769916 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.773595 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.786530 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.800069 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.813105 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.826579 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.836976 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.856035 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.872498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.872560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.872581 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.872606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.872627 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.877516 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.891145 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.902479 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.913736 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.927234 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.976266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.976336 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.976355 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.976382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:19 crc kubenswrapper[4915]: I0127 18:42:19.976401 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:19Z","lastTransitionTime":"2026-01-27T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.079635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.080273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.080300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.080368 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.080395 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.183860 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.183952 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.183975 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.184006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.184029 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.287007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.287101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.287119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.287141 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.287159 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.314137 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:24:45.282718244 +0000 UTC Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.357508 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.357521 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.357757 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:20 crc kubenswrapper[4915]: E0127 18:42:20.357700 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:20 crc kubenswrapper[4915]: E0127 18:42:20.357999 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:20 crc kubenswrapper[4915]: E0127 18:42:20.358148 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.390457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.390516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.390535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.390558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.390575 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.494247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.494313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.494332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.494356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.494373 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.575293 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.600577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.600689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.600720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.600762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.600841 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.704883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.704944 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.705003 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.705029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.705048 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.808388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.808442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.808459 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.808481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.808497 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.911811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.911845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.911856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.911872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:20 crc kubenswrapper[4915]: I0127 18:42:20.911883 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:20Z","lastTransitionTime":"2026-01-27T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.014564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.014597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.014610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.014625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.014636 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.117473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.117538 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.117555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.117577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.117592 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.226018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.226070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.226087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.226108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.226125 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.314541 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:41:56.089606997 +0000 UTC Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.330206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.330271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.330295 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.330324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.330345 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.433311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.433358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.433375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.433396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.433413 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.537298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.537378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.537400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.537429 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.537452 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.640446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.640513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.640534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.640559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.640578 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.743424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.743486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.743508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.743537 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.743561 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.847056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.847115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.847136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.847159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.847176 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.950432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.950488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.950500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.950515 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:21 crc kubenswrapper[4915]: I0127 18:42:21.950526 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:21Z","lastTransitionTime":"2026-01-27T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.052918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.052978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.052987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.053001 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.053010 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.155635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.155685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.155697 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.155721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.155733 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.257881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.257918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.257930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.257946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.257957 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.360344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.360374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.360384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.360399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.360410 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.389191 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll"] Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.389620 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.392423 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.393179 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.407225 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.426703 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed
343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.438593 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.452486 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.462779 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc 
kubenswrapper[4915]: I0127 18:42:22.462831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.462841 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.462856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.462867 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.581388 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:37:02.150989714 +0000 UTC Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.581769 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:22 crc kubenswrapper[4915]: E0127 18:42:22.582027 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.582547 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:22 crc kubenswrapper[4915]: E0127 18:42:22.582687 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.582781 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:22 crc kubenswrapper[4915]: E0127 18:42:22.582937 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.583931 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.586518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.586578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.586596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.586622 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.586646 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.607591 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.622492 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.634630 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.650639 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.660858 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.673211 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.681258 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf8q\" (UniqueName: \"kubernetes.io/projected/8b9bf3c9-fe11-40a2-8577-a53574d1f527-kube-api-access-5zf8q\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.681329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.681361 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.681765 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.684760 4915 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.689224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.689259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.689273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.689294 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.689310 4915 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.693402 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.713886 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.729522 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.742345 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.783160 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.783266 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf8q\" (UniqueName: \"kubernetes.io/projected/8b9bf3c9-fe11-40a2-8577-a53574d1f527-kube-api-access-5zf8q\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 
18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.783376 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.783484 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.783885 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.784480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.789182 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b9bf3c9-fe11-40a2-8577-a53574d1f527-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.792266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.792340 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.792359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.792384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.792404 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.814445 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf8q\" (UniqueName: \"kubernetes.io/projected/8b9bf3c9-fe11-40a2-8577-a53574d1f527-kube-api-access-5zf8q\") pod \"ovnkube-control-plane-749d76644c-7plll\" (UID: \"8b9bf3c9-fe11-40a2-8577-a53574d1f527\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.896004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.896071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.896089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.896112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.896129 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.906561 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" Jan 27 18:42:22 crc kubenswrapper[4915]: W0127 18:42:22.930842 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b9bf3c9_fe11_40a2_8577_a53574d1f527.slice/crio-b630a60e7bc3ae8eac83cd9586096e03d1efd76e7f6794b3064acc0e537950cd WatchSource:0}: Error finding container b630a60e7bc3ae8eac83cd9586096e03d1efd76e7f6794b3064acc0e537950cd: Status 404 returned error can't find the container with id b630a60e7bc3ae8eac83cd9586096e03d1efd76e7f6794b3064acc0e537950cd Jan 27 18:42:22 crc kubenswrapper[4915]: I0127 18:42:22.999745 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:22.999869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:22.999889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:22.999910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:22.999923 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:22Z","lastTransitionTime":"2026-01-27T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.102229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.102263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.102278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.102292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.102302 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.204898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.205186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.205197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.205211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.205220 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.308635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.308720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.308750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.308780 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.308842 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.411403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.411452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.411467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.411488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.411534 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.514933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.515012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.515036 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.515064 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.515085 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.582531 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:22:24.847229407 +0000 UTC Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.617623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.617674 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.617692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.617721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.617739 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.629074 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/0.log" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.632977 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54" exitCode=1 Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.633075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.634344 4915 scope.go:117] "RemoveContainer" containerID="4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.634410 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" event={"ID":"8b9bf3c9-fe11-40a2-8577-a53574d1f527","Type":"ContainerStarted","Data":"1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.634457 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" event={"ID":"8b9bf3c9-fe11-40a2-8577-a53574d1f527","Type":"ContainerStarted","Data":"b630a60e7bc3ae8eac83cd9586096e03d1efd76e7f6794b3064acc0e537950cd"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.657071 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.677057 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.700015 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.721913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.721961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.721975 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.721993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.722009 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.724726 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.742840 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.763156 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.776139 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.787681 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.801058 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.810943 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.818935 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.824950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.824988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.824997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.825009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.825018 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.837811 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.850876 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.861716 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 
18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.872017 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.891029 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:23Z\\\",\\\"message\\\":\\\" 18:42:23.173783 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:23.174397 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:23.174438 6230 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0127 18:42:23.175019 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:23.175037 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:23.175044 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:42:23.175054 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:23.176909 6230 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:23.176960 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:23.176988 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:42:23.176996 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:42:23.177021 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:23.177036 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:23.177044 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:23.177053 6230 factory.go:656] Stopping watch factory\\\\nI0127 18:42:23.177068 6230 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.926945 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.926988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.926998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.927013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:23 crc kubenswrapper[4915]: I0127 18:42:23.927025 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:23Z","lastTransitionTime":"2026-01-27T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.029707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.029737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.029748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.029764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.029775 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.133000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.133046 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.133059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.133081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.133094 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.236604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.236667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.236686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.236711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.236729 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.308269 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.322870 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339150 4915 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"
hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339504 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339538 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.339553 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.356235 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.356716 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.356941 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.357200 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.357348 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.357418 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.357491 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.375013 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.394637 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.409912 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.429147 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.441511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.441573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.441590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.441619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.441636 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.444659 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.451305 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-d467q"] Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.451894 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.451961 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.479141 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:23Z\\\",\\\"message\\\":\\\" 18:42:23.173783 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:23.174397 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:23.174438 6230 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0127 18:42:23.175019 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:23.175037 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:23.175044 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:42:23.175054 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:23.176909 6230 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:23.176960 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:23.176988 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:42:23.176996 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:42:23.177021 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:23.177036 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:23.177044 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:23.177053 6230 factory.go:656] Stopping watch factory\\\\nI0127 18:42:23.177068 6230 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.507051 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.508352 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.508493 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4fj\" (UniqueName: \"kubernetes.io/projected/65be8e09-e032-40de-b290-c66c07282211-kube-api-access-2d4fj\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.522409 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.534355 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.543983 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.544042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.544086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.544112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.544131 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.546087 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.558707 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.569046 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.583048 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:53:04.417505783 +0000 UTC Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.583223 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.596539 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.607131 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc 
kubenswrapper[4915]: I0127 18:42:24.609706 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.609844 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d4fj\" (UniqueName: \"kubernetes.io/projected/65be8e09-e032-40de-b290-c66c07282211-kube-api-access-2d4fj\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.609906 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:24 crc kubenswrapper[4915]: E0127 18:42:24.609983 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:25.109965041 +0000 UTC m=+36.467818695 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.619956 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.631079 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.632606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4fj\" (UniqueName: \"kubernetes.io/projected/65be8e09-e032-40de-b290-c66c07282211-kube-api-access-2d4fj\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.639218 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/0.log" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.644014 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.644555 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.644963 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.648955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 
18:42:24.648991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.649002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.649016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.649026 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.649812 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" event={"ID":"8b9bf3c9-fe11-40a2-8577-a53574d1f527","Type":"ContainerStarted","Data":"eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.658317 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.669217 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.682386 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.692765 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.704049 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.716148 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.727340 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.750282 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:23Z\\\",\\\"message\\\":\\\" 18:42:23.173783 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:23.174397 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:23.174438 6230 handler.go:208] Removed 
*v1.Pod event handler 6\\\\nI0127 18:42:23.175019 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:23.175037 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:23.175044 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:42:23.175054 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:23.176909 6230 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:23.176960 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:23.176988 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:42:23.176996 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:42:23.177021 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:23.177036 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:23.177044 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:23.177053 6230 factory.go:656] Stopping watch factory\\\\nI0127 18:42:23.177068 6230 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.752014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.752072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.752086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.752102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.752114 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.773210 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641
610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.793595 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.811437 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.833781 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.853980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.854129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.854213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.854313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.854398 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.862487 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.883785 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.906900 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.923537 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.952634 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:23Z\\\",\\\"message\\\":\\\" 18:42:23.173783 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:23.174397 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:23.174438 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:23.175019 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 
18:42:23.175037 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:23.175044 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:42:23.175054 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:23.176909 6230 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:23.176960 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:23.176988 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:42:23.176996 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:42:23.177021 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:23.177036 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:23.177044 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:23.177053 6230 factory.go:656] Stopping watch factory\\\\nI0127 18:42:23.177068 6230 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.956636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.956680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.956692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.956709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.956722 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:24Z","lastTransitionTime":"2026-01-27T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.967935 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:24 crc kubenswrapper[4915]: I0127 18:42:24.989916 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.008047 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.026190 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.039422 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc 
kubenswrapper[4915]: I0127 18:42:25.055971 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.059852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.059996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.060127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.060223 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.060310 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.076504 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.091038 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.107518 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.114067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:25 crc kubenswrapper[4915]: E0127 18:42:25.114249 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:25 crc kubenswrapper[4915]: E0127 18:42:25.114356 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:26.114342718 +0000 UTC m=+37.472196382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.119971 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.134215 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.144576 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.163398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.163461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.163479 4915 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.163503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.163523 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.267000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.267060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.267079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.267154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.267174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.370095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.370156 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.370173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.370209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.370227 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.474018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.474116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.474134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.474159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.474178 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.576348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.576440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.576457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.576481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.576499 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.583632 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:23:33.106250256 +0000 UTC Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.656547 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/1.log" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.657753 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/0.log" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.662036 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9" exitCode=1 Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.662113 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.662192 4915 scope.go:117] "RemoveContainer" containerID="4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.663589 4915 scope.go:117] "RemoveContainer" containerID="71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9" Jan 27 18:42:25 crc kubenswrapper[4915]: E0127 18:42:25.663995 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.679967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.680052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.680077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.680106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.680130 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.685988 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.705661 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.720544 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc 
kubenswrapper[4915]: I0127 18:42:25.740486 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.760260 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.779755 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.783592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.783676 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.783696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.783748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.783768 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.804864 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.823696 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.842489 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.861709 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.888206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.888273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.888296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.888341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.888366 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.899987 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.921088 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.939414 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.954566 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.982191 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b825b2ae15416ff6d13893ffed5b9eca94db9c0b5c88e82e3bfe16be0abcf54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:23Z\\\",\\\"message\\\":\\\" 18:42:23.173783 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:23.174397 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:23.174438 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:23.175019 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 
18:42:23.175037 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:23.175044 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 18:42:23.175054 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:23.176909 6230 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:23.176960 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:23.176988 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 18:42:23.176996 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 18:42:23.177021 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:23.177036 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:23.177044 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:23.177053 6230 factory.go:656] Stopping watch factory\\\\nI0127 18:42:23.177068 6230 ovnkube.go:599] Stopped ovnkube\\\\nI0127 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\"
:\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:25 crc 
kubenswrapper[4915]: I0127 18:42:25.991492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.991567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.991587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.991610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:25 crc kubenswrapper[4915]: I0127 18:42:25.991657 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:25Z","lastTransitionTime":"2026-01-27T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.001944 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.028226 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.094955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.095007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.095020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.095038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.095051 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.125934 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.126135 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.126238 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:28.126208741 +0000 UTC m=+39.484062435 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.198349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.198409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.198426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.198453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.198474 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.302059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.302165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.302191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.302222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.302246 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.329081 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.329252 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:42:42.329221478 +0000 UTC m=+53.687075182 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.357185 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.357236 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.357255 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.357381 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.357493 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.357568 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.357646 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.357777 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.406312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.406371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.406388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.406411 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.406428 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.430527 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.430640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.430686 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.430722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.430888 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.430963 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:42.430942237 +0000 UTC m=+53.788795941 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431089 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431132 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431185 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431112 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431221 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431242 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431266 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431288 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:42.431246575 +0000 UTC m=+53.789100319 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431341 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:42.431306446 +0000 UTC m=+53.789160200 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.431382 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:42.431363838 +0000 UTC m=+53.789217702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.510278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.510343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.510360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.510388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.510406 4915 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.584287 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:08:20.511769684 +0000 UTC Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.613523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.613605 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.613631 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.613660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.613684 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.669571 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/1.log" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.676447 4915 scope.go:117] "RemoveContainer" containerID="71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9" Jan 27 18:42:26 crc kubenswrapper[4915]: E0127 18:42:26.676696 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.709208 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.717270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.717325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.717344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.717369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.717387 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.733203 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.756044 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.773964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.796990 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.817218 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.820553 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.820612 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.820625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.820642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.820655 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.840113 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.857420 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.879816 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.892989 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc 
kubenswrapper[4915]: I0127 18:42:26.909181 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.924535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.924601 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.924626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.924658 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.924680 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:26Z","lastTransitionTime":"2026-01-27T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.927625 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.949284 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.963328 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.981919 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:26 crc kubenswrapper[4915]: I0127 18:42:26.998974 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027578 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027735 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027788 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.027845 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.130930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.130999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.131011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.131028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.131041 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.233656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.233715 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.233742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.233772 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.233829 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.336894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.336979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.336999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.337027 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.337046 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.439910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.440035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.440058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.440087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.440109 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.543432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.543495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.543513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.543537 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.543556 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.585007 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:50:49.735255138 +0000 UTC Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.646933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.646978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.646991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.647010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.647024 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.750199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.750262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.750280 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.750306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.750326 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.853851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.853921 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.853946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.853980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.854006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.956520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.956593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.956616 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.956644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:27 crc kubenswrapper[4915]: I0127 18:42:27.956662 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:27Z","lastTransitionTime":"2026-01-27T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.060192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.060242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.060258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.060282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.060299 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.148068 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.148287 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.148476 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:32.148407088 +0000 UTC m=+43.506260842 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.163932 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.163997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.164020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.164049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.164070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.267712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.267772 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.267817 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.267842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.267862 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.356849 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.357065 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.356894 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.356862 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.357172 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.356898 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.357305 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:28 crc kubenswrapper[4915]: E0127 18:42:28.357472 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.371935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.372000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.372019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.372041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.372058 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.474850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.474919 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.474943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.474972 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.474994 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.578166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.578239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.578262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.578291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.578314 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.585601 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:30:08.030456039 +0000 UTC Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.682166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.682219 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.682230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.682256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.682269 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.786555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.786636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.786660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.786690 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.786713 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.890074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.890135 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.890153 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.890179 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.890197 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.993437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.993513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.993538 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.993585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:28 crc kubenswrapper[4915]: I0127 18:42:28.993638 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:28Z","lastTransitionTime":"2026-01-27T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.096532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.096656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.096677 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.096702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.096719 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.200688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.200741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.200758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.200782 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.200834 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.304131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.304200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.304217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.304242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.304260 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.376864 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.394250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.408165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.408230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.408252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.408282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.408305 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.411367 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.429746 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.459017 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.479577 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.498078 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.510242 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.511307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.511358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.511374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.511395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.511410 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.533904 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.565463 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.585860 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.586605 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:09:52.41886137 +0000 UTC Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.602726 4915 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.614494 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.614710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.614895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.615070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.615291 4915 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.620344 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.638102 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb33
94bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef
0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.656867 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.675218 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.690300 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:29 crc 
kubenswrapper[4915]: I0127 18:42:29.718638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.718681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.718697 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.718719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.718735 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.821497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.821554 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.821571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.821599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.821617 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.925206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.925266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.925288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.925317 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:29 crc kubenswrapper[4915]: I0127 18:42:29.925343 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:29Z","lastTransitionTime":"2026-01-27T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.028738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.028880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.028908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.028939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.028962 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.073338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.073425 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.073445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.073475 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.073494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.094766 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.100663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.100739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.100763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.100876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.100905 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.119039 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.123430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.123489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.123514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.123544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.123566 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.145180 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.150303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.150363 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.150379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.150404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.150423 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.170139 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.174741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.174836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.174866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.174896 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.174920 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.196590 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.196839 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.199056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.199108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.199126 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.199148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.199166 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.303298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.303347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.303365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.303387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.303405 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.357209 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.357280 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.357287 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.357209 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.357415 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.357552 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.357677 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:30 crc kubenswrapper[4915]: E0127 18:42:30.357890 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.406575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.406665 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.406692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.406721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.406745 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.510514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.510564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.510583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.510608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.510629 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.587432 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:33:17.489694642 +0000 UTC Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.613378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.613438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.613462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.613491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.613512 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.716225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.716282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.716301 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.716322 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.716340 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.819857 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.819918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.819933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.819952 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.819972 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.922558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.922622 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.922663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.922695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:30 crc kubenswrapper[4915]: I0127 18:42:30.922717 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:30Z","lastTransitionTime":"2026-01-27T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.025656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.025730 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.025753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.025781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.025839 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.134685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.134739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.134758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.134781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.134830 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.238749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.238833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.238851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.238874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.238891 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.342383 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.342461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.342477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.342500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.342518 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.446091 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.446184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.446210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.446288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.446313 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.549578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.550007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.550135 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.550300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.550424 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.588225 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:29:02.355420552 +0000 UTC Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.654026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.654103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.654131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.654160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.654181 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.757716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.757835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.757853 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.757877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.757895 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.860376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.860423 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.860437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.860453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.860465 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.963191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.963231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.963242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.963257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:31 crc kubenswrapper[4915]: I0127 18:42:31.963270 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:31Z","lastTransitionTime":"2026-01-27T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.066457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.066516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.066532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.066555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.066573 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.169375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.169449 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.169473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.169503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.169525 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.196118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.196284 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.196349 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:40.196332529 +0000 UTC m=+51.554186193 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.271424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.271470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.271485 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.271502 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.271514 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.357306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.357316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.357369 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.357431 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.357548 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.357698 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.357857 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:32 crc kubenswrapper[4915]: E0127 18:42:32.357933 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.374307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.374374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.374395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.374420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.374436 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.478126 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.478189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.478206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.478231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.478249 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.581116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.581185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.581204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.581231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.581252 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.589396 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:24:13.427003783 +0000 UTC Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.684597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.684660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.684681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.684708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.684726 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.787469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.787500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.787511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.787524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.787534 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.890002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.890050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.890062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.890080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.890092 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.993053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.993111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.993127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.993154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:32 crc kubenswrapper[4915]: I0127 18:42:32.993174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:32Z","lastTransitionTime":"2026-01-27T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.096670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.096709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.096719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.096738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.096749 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.200588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.200675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.200693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.200715 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.200732 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.303614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.303672 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.303691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.303717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.303736 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.407885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.407962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.407983 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.408413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.408477 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.512314 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.512370 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.512389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.512413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.512431 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.590179 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:03:11.015654293 +0000 UTC Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.615181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.615247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.615263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.615286 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.615304 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.718567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.718636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.718663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.718692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.718720 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.827890 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.827961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.827980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.828005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.828021 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.931316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.931371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.931388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.931412 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:33 crc kubenswrapper[4915]: I0127 18:42:33.931431 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:33Z","lastTransitionTime":"2026-01-27T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.034189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.034257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.034279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.034303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.034320 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.137635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.137742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.137767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.137822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.137877 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.241496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.241557 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.241572 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.241639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.241657 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.344400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.344464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.344481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.344505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.344521 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.357016 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.357106 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.357125 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:34 crc kubenswrapper[4915]: E0127 18:42:34.357220 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.357248 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:34 crc kubenswrapper[4915]: E0127 18:42:34.357405 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:34 crc kubenswrapper[4915]: E0127 18:42:34.357611 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:34 crc kubenswrapper[4915]: E0127 18:42:34.357761 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.447360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.447461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.447488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.447527 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.447551 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.550547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.550609 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.550628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.550653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.550672 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.590784 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:41:25.106600121 +0000 UTC
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.653147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.653194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.653205 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.653221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.653235 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.755706 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.755835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.755856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.755893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.755918 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.859527 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.859591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.859613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.859643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.859667 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.962962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.963025 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.963043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.963065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:34 crc kubenswrapper[4915]: I0127 18:42:34.963082 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:34Z","lastTransitionTime":"2026-01-27T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.066243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.066298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.066316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.066342 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.066359 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.169917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.170004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.170029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.170058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.170076 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.273326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.273373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.273386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.273403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.273438 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.376424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.376461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.376470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.376482 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.376494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.479461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.479558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.479578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.479606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.479663 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.582667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.582722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.582738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.582760 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.582776 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.591100 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:30:08.658526319 +0000 UTC
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.686263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.686327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.686352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.686385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.686407 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.789186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.789241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.789258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.789278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.789295 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.892893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.892984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.893002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.893023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.893039 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.996359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.996422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.996439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.996463 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:35 crc kubenswrapper[4915]: I0127 18:42:35.996483 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:35Z","lastTransitionTime":"2026-01-27T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.099213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.099633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.099828 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.100008 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.100181 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.207687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.207783 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.207813 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.207836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.207847 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.311863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.311943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.312003 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.312033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.312070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.357575 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.357638 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.357682 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.357776 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:36 crc kubenswrapper[4915]: E0127 18:42:36.357753 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:36 crc kubenswrapper[4915]: E0127 18:42:36.357977 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:36 crc kubenswrapper[4915]: E0127 18:42:36.358181 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:36 crc kubenswrapper[4915]: E0127 18:42:36.358322 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.415784 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.415874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.415892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.415915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.415933 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.518628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.518689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.518708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.518731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.518748 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.591970 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:25:18.423668232 +0000 UTC
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.622083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.622142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.622166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.622196 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.622218 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.726342 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.726435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.726455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.726486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.726544 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.830022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.830087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.830107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.830135 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.830155 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.933935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.934005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.934029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.934055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:36 crc kubenswrapper[4915]: I0127 18:42:36.934072 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:36Z","lastTransitionTime":"2026-01-27T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.037892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.037949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.037966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.037990 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.038009 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.140946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.141000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.141017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.141039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.141055 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.244403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.244479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.244503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.244532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.244555 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.348190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.348249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.348272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.348302 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.348328 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.451616 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.451678 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.451696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.451720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.451737 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.555535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.555595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.555612 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.555637 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.555654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.592843 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:23:05.414492086 +0000 UTC Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.659112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.659184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.659207 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.659240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.659261 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.762112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.762177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.762204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.762231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.762252 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.866094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.866166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.866191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.866226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.866249 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.969551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.969610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.969629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.969655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:37 crc kubenswrapper[4915]: I0127 18:42:37.969673 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:37Z","lastTransitionTime":"2026-01-27T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.072662 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.072728 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.072744 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.072769 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.072785 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.175727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.175780 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.175844 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.175885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.175910 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.279188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.279252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.279273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.279304 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.279327 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.356999 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.357073 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.357179 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:38 crc kubenswrapper[4915]: E0127 18:42:38.357175 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.357219 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:38 crc kubenswrapper[4915]: E0127 18:42:38.357415 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:38 crc kubenswrapper[4915]: E0127 18:42:38.357544 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:38 crc kubenswrapper[4915]: E0127 18:42:38.357742 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.382886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.382993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.383012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.383037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.383054 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.486365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.486428 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.486452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.486484 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.486502 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.591096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.591163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.591183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.591210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.591230 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.593201 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:12:30.601212007 +0000 UTC Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.694065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.694132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.694155 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.694335 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.694370 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.797880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.797935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.797954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.797977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.797998 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.901564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.901619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.901637 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.901661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:38 crc kubenswrapper[4915]: I0127 18:42:38.901679 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:38Z","lastTransitionTime":"2026-01-27T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.005120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.005246 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.005266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.005289 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.005306 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.108655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.108728 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.108743 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.108767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.108783 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.212487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.212545 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.212564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.212589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.212609 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.316289 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.316589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.316749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.316964 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.317193 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.358245 4915 scope.go:117] "RemoveContainer" containerID="71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.381832 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.403317 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.421776 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc 
kubenswrapper[4915]: I0127 18:42:39.423111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.423169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.423187 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.423217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.423235 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.440942 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.462195 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.483958 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.501734 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:4
1:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.517006 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.527236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.527293 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.527305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.527321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.527333 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.532941 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.548653 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.562526 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.578949 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.593688 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:30:45.143582235 +0000 UTC Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.605463 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.623185 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.630511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.630561 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.630574 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.630594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.630606 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.638485 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.652352 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.668055 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.724197 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/1.log" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.731731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732407 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732926 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.732950 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.752652 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.769354 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tm
p/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.785810 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.807335 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.826172 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.835931 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.835981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.835998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc 
kubenswrapper[4915]: I0127 18:42:39.836021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.836040 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.844106 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.870771 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.893657 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.916105 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.936392 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.937549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.937582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.937592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.937605 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.937613 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:39Z","lastTransitionTime":"2026-01-27T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.953611 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.969191 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:39 crc kubenswrapper[4915]: I0127 18:42:39.984734 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.000344 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.013296 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.034171 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.039490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc 
kubenswrapper[4915]: I0127 18:42:40.039534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.039545 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.039559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.039570 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.051015 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.141908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.141938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.141946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.141962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.141970 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.244197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.244265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.244288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.244316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.244338 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.288508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.288648 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.288706 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:42:56.288691114 +0000 UTC m=+67.646544768 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.347227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.347266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.347280 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.347297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.347308 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.357326 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.357333 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.357334 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.357761 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.357501 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.357334 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.357858 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.358078 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.376416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.376733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.376889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.376993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.377091 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.391117 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.395618 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.395691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.395709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.395734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.395752 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.418236 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.424409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.424470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.424488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.424513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.424533 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.446508 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.452100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.452137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.452152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.452173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.452187 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.473578 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.478030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.478167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.478189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.478215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.478268 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.502339 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z"
Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.502455 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.504713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.504741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.504749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.504764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.504775 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.594262 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:26:06.122793547 +0000 UTC
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.607416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.607459 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.607476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.607496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.607510 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.710667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.711115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.711252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.711393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.711539 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.738440 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/2.log"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.739771 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/1.log"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.744660 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a" exitCode=1
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.744723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a"}
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.744773 4915 scope.go:117] "RemoveContainer" containerID="71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.746780 4915 scope.go:117] "RemoveContainer" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a"
Jan 27 18:42:40 crc kubenswrapper[4915]: E0127 18:42:40.748049 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d"
Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.771240 4915 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.795324 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.814491 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc 
kubenswrapper[4915]: I0127 18:42:40.815103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.815184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.815206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.815237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.815260 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.833246 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.850497 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.871094 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.892605 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:4
1:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.913084 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.918778 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.918850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.918869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.918892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.918909 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:40Z","lastTransitionTime":"2026-01-27T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.931879 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.948938 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.964081 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:40 crc kubenswrapper[4915]: I0127 18:42:40.985859 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71dd38bb39520c289d210e71db67bf3a594c7e86f7142d49cbe7f06a5ebf05e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"3c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 18:42:24.899064 6398 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:24.899072 6398 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 18:42:24.899080 6398 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 18:42:24.899078 6398 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.149:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {cab7c637-a021-4a4d-a4b9-06d63c44316f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 18:42:24.899140 6398 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:40Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.021532 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.021586 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.021595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.021611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.021621 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.022625 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.037018 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.051410 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.064657 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.078116 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.124050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.124121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.124136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.124161 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.124177 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.227846 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.227939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.227962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.227997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.228022 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.331765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.331892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.331912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.331942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.331962 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.436147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.436213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.436237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.436273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.436301 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.538895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.538941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.538953 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.538970 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.538981 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.595452 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:23:46.607355899 +0000 UTC Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.641871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.641963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.641987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.642018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.642041 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.745958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.746101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.746911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.746964 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.746987 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.750873 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/2.log" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.756157 4915 scope.go:117] "RemoveContainer" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a" Jan 27 18:42:41 crc kubenswrapper[4915]: E0127 18:42:41.756429 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.777390 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.795145 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc 
kubenswrapper[4915]: I0127 18:42:41.817544 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.838091 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.850451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.850522 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.850541 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.850572 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.850595 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.859532 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.880722 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.894518 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.910725 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.927532 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:4
1:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.948573 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.953716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.953759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.953773 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.953811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.953827 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:41Z","lastTransitionTime":"2026-01-27T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.967892 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.981346 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.982630 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:41 crc kubenswrapper[4915]: I0127 18:42:41.996525 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.015367 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.041139 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.056533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.056600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.056621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.056650 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.056673 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.060624 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.075607 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.088446 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.104472 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.116442 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.127412 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.141935 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.150048 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.160611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.160653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.160665 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.160688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.160714 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.166215 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.178514 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.193770 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.207173 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.217881 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.227263 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc 
kubenswrapper[4915]: I0127 18:42:42.239687 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.251886 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.263410 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.263449 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.263462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.263480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.263494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.280959 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.311757 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.324057 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.332503 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.341459 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.356779 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.356824 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.356838 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.356915 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.356940 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.357056 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.357095 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.357185 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.366407 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.366437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.366447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.366463 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.366475 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.415569 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.415838 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:43:14.41578391 +0000 UTC m=+85.773637614 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.469313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.469358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.469376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.469398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.469414 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.517656 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.517756 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.517896 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518005 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:14.5179746 +0000 UTC m=+85.875828304 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518027 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.517903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518057 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518208 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518239 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518099 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:14.518080143 +0000 UTC m=+85.875933837 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.518351 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518452 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:14.518423592 +0000 UTC m=+85.876277286 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518545 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518586 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518614 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:42 crc kubenswrapper[4915]: E0127 18:42:42.518699 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:14.518673118 +0000 UTC m=+85.876526872 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.572016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.572069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.572084 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.572104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.572122 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.595721 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:49:35.16309867 +0000 UTC Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.674693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.674751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.674771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.674822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.674843 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.777460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.777523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.777543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.777566 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.777583 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.880442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.880506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.880526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.880551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.880570 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.983910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.983986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.984010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.984040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:42 crc kubenswrapper[4915]: I0127 18:42:42.984062 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:42Z","lastTransitionTime":"2026-01-27T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.087072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.087431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.087602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.087822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.087982 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.191549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.191996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.192145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.192349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.192485 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.295001 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.295045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.295067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.295093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.295118 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.397708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.397774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.397820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.397851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.397869 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.501675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.501745 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.501763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.501787 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.501844 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.597002 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 02:19:20.594043804 +0000 UTC
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.604418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.604495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.604512 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.604537 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.604554 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.707930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.707994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.708017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.708047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.708071 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.810867 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.810926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.810946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.810972 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.810990 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.914646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.914739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.914762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.914788 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:43 crc kubenswrapper[4915]: I0127 18:42:43.914842 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:43Z","lastTransitionTime":"2026-01-27T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.017685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.017764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.017830 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.017865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.017889 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.120691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.120755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.120773 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.120835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.120861 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.224571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.224679 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.224702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.224729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.224750 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.327710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.327865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.327903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.327929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.327946 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.356624 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.356710 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.356749 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.356636 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:44 crc kubenswrapper[4915]: E0127 18:42:44.356915 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:44 crc kubenswrapper[4915]: E0127 18:42:44.357098 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:44 crc kubenswrapper[4915]: E0127 18:42:44.357185 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:44 crc kubenswrapper[4915]: E0127 18:42:44.357294 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.430770 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.430889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.430912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.430937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.430955 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.534899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.534973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.534997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.535029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.535055 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.597884 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:29:24.619942901 +0000 UTC
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.639616 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.639687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.639705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.639732 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.639750 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.742374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.742477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.742504 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.742535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.742559 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.846128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.846236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.846255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.846318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.846347 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.950560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.950629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.950670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.950705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:44 crc kubenswrapper[4915]: I0127 18:42:44.950729 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:44Z","lastTransitionTime":"2026-01-27T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.053689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.053760 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.053782 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.053843 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.053867 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.156076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.156147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.156169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.156190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.156208 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.259386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.259452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.259476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.259505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.259525 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.362876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.362953 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.362974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.363002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.363021 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.466591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.466647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.466664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.466686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.466703 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.569935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.569997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.570015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.570039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.570058 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.598413 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:51:17.916918185 +0000 UTC
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.673213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.673292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.673335 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.673361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.673379 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.776142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.776211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.776228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.776250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.776269 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.880045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.880100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.880117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.880148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.880166 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.982937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.983016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.983039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.983067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:45 crc kubenswrapper[4915]: I0127 18:42:45.983088 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:45Z","lastTransitionTime":"2026-01-27T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.086384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.086459 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.086481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.086511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.086534 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.190590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.190997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.191096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.191240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.191336 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.295479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.295574 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.295595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.295623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.295646 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.356988 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.357051 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.356981 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.357148 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:46 crc kubenswrapper[4915]: E0127 18:42:46.357365 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:46 crc kubenswrapper[4915]: E0127 18:42:46.357489 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:46 crc kubenswrapper[4915]: E0127 18:42:46.357627 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:46 crc kubenswrapper[4915]: E0127 18:42:46.357727 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.398480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.398551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.398572 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.398598 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.398619 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.502476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.502580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.502600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.502626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.502645 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.598920 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:01:53.822772137 +0000 UTC Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.605932 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.606000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.606019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.606045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.606065 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.708968 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.709031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.709042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.709059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.709100 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.811777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.811911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.811936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.811971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.811999 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.914649 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.914729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.914740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.914761 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:46 crc kubenswrapper[4915]: I0127 18:42:46.914778 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:46Z","lastTransitionTime":"2026-01-27T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.018067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.018141 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.018164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.018189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.018208 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.121270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.121333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.121352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.121376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.121396 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.227673 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.227725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.227736 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.227749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.227758 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.331167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.331231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.331243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.331268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.331285 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.434632 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.434695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.434714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.434739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.434757 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.537774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.537874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.537897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.537929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.537955 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.599510 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:10:51.968251655 +0000 UTC Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.642018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.642085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.642097 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.642120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.642135 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.745488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.745565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.745594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.745623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.745643 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.848563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.848615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.848627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.848643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.848656 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.951480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.951517 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.951529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.951547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:47 crc kubenswrapper[4915]: I0127 18:42:47.951561 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:47Z","lastTransitionTime":"2026-01-27T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.054751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.055102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.055246 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.055426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.055572 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.158325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.158379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.158395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.158419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.158437 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.261225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.261292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.261313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.261343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.261361 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.356639 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.356708 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.356674 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.356915 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:48 crc kubenswrapper[4915]: E0127 18:42:48.357025 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:48 crc kubenswrapper[4915]: E0127 18:42:48.356904 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:48 crc kubenswrapper[4915]: E0127 18:42:48.357103 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:48 crc kubenswrapper[4915]: E0127 18:42:48.357157 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.364071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.364122 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.364139 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.364165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.364183 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.467656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.467732 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.467750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.467775 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.467836 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.571378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.571436 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.571456 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.571486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.571510 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.599637 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:38:49.361909486 +0000 UTC
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.674341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.674399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.674417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.674440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.674458 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.778255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.778318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.778333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.778353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.778368 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.880970 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.881034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.881052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.881076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.881094 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.984526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.984583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.984602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.984627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:48 crc kubenswrapper[4915]: I0127 18:42:48.984645 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:48Z","lastTransitionTime":"2026-01-27T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.087774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.087898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.087921 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.087955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.087978 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.191670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.191728 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.191746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.191770 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.191834 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.294195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.294271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.294291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.294315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.294333 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.374459 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.388272 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.396550 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.396604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.396622 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.396645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.396664 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.403994 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.424296 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.438275 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc 
kubenswrapper[4915]: I0127 18:42:49.456756 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.473824 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.484946 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.496443 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.499192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.499226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.499236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.499250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.499260 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.511233 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.523161 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.537756 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.567776 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.587394 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.599894 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:11:57.998405203 +0000 UTC Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.601912 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.601986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.602006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.602488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.602555 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.605863 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffd
d6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.623264 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.639272 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.668582 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.705518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.705553 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.705564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.705579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.705591 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.807771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.807878 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.807898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.807922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.807940 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.910214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.910712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.910938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.911130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:49 crc kubenswrapper[4915]: I0127 18:42:49.911281 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:49Z","lastTransitionTime":"2026-01-27T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.014002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.014031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.014040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.014053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.014062 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.116403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.116442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.116454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.116468 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.116478 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.218955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.219004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.219019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.219037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.219050 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.321345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.321437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.321457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.321480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.321496 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.357103 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.357260 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.357419 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.357488 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.357533 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.357617 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.357707 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.358245 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.425095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.425165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.425180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.425199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.425214 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.528192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.528220 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.528228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.528239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.528248 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.600527 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:51:18.567109096 +0000 UTC Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.631295 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.631372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.631399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.631424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.631441 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.663364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.663420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.663439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.663461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.663479 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.684955 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.690312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.690376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.690393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.690418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.690438 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.708170 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.712329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.712365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.712375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.712390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.712401 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.731330 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.737454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.737501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.737512 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.737535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.737550 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.750244 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.755169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.755244 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.755258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.755283 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.755299 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.766717 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:50Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:50 crc kubenswrapper[4915]: E0127 18:42:50.766888 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.769006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.769048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.769063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.769087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.769105 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.873415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.873478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.873494 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.873516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.873532 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.976297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.976355 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.976372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.976394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:50 crc kubenswrapper[4915]: I0127 18:42:50.976414 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:50Z","lastTransitionTime":"2026-01-27T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.079151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.079224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.079243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.079270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.079290 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.183310 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.183369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.183385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.183410 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.183426 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.286639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.286689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.286704 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.286722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.286740 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.390034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.390100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.390122 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.390151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.390176 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.493371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.493433 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.493473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.493499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.493520 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.596575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.596626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.596643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.596666 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.596684 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.600965 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:30:18.525604383 +0000 UTC
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.699431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.699513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.699530 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.699557 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.699575 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.803038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.803099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.803117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.803140 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.803163 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.906624 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.906708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.906734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.906764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:51 crc kubenswrapper[4915]: I0127 18:42:51.906786 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:51Z","lastTransitionTime":"2026-01-27T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.009654 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.009707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.009724 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.009749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.009768 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.113538 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.113596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.113614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.113638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.113660 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.217393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.217470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.217488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.217513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.217529 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.320837 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.320892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.320908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.320930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.320947 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.357387 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.357435 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.357434 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.357409 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:52 crc kubenswrapper[4915]: E0127 18:42:52.357609 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:52 crc kubenswrapper[4915]: E0127 18:42:52.357785 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:52 crc kubenswrapper[4915]: E0127 18:42:52.357902 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:52 crc kubenswrapper[4915]: E0127 18:42:52.358045 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.423709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.423769 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.423780 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.423820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.423832 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.527231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.527311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.527326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.527345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.527357 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.601560 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:13:03.91861239 +0000 UTC
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.630653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.630711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.630721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.630741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.630753 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.734188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.734250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.734268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.734291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.734311 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.837950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.838036 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.838057 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.838089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.838112 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.941440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.941527 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.941594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.941621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:52 crc kubenswrapper[4915]: I0127 18:42:52.941638 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:52Z","lastTransitionTime":"2026-01-27T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.045315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.045389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.045403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.045611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.045626 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.149341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.149400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.149413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.149437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.149451 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.252240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.252298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.252316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.252339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.252360 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.355101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.355152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.355163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.355174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.355202 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.456821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.456852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.456860 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.456874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.456882 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.559203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.559233 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.559242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.559254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.559262 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.602506 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:12:11.036344568 +0000 UTC
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.662447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.662488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.662497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.662511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.662520 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.764825 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.764868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.764881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.764898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.764911 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.867228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.867617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.867630 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.867648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.867658 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.970230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.970309 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.970321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.970338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:53 crc kubenswrapper[4915]: I0127 18:42:53.970351 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:53Z","lastTransitionTime":"2026-01-27T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.072497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.072548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.072557 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.072576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.072586 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.175210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.175318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.175337 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.175359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.175376 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.278390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.278432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.278443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.278458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.278469 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.357121 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.357170 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.357389 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.357457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:54 crc kubenswrapper[4915]: E0127 18:42:54.357580 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:54 crc kubenswrapper[4915]: E0127 18:42:54.357659 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.357733 4915 scope.go:117] "RemoveContainer" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a"
Jan 27 18:42:54 crc kubenswrapper[4915]: E0127 18:42:54.357844 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:42:54 crc kubenswrapper[4915]: E0127 18:42:54.357981 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:42:54 crc kubenswrapper[4915]: E0127 18:42:54.357996 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.382879 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.382927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.382938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.382956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.382972 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.485121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.485200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.485215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.485235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.485250 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.587455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.587506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.587519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.587535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.587547 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.602593 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:06:50.172367776 +0000 UTC Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.690170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.690213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.690224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.690241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.690254 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.793374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.793424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.793435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.793451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.793462 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.896263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.896307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.896319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.896344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.896358 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.999419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.999477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.999496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.999519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:54 crc kubenswrapper[4915]: I0127 18:42:54.999536 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:54Z","lastTransitionTime":"2026-01-27T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.101777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.101856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.101869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.101886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.101897 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.205063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.205133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.205144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.205165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.205177 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.308048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.308108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.308127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.308150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.308169 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.411150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.411195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.411209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.411226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.411238 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.513686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.513742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.513754 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.513774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.513789 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.603111 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:33:47.987465946 +0000 UTC Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.616507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.616537 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.616547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.616559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.616570 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.719657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.719716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.719733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.720398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.720444 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.822775 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.822835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.822845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.822859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.822869 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.926094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.926135 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.926146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.926164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:55 crc kubenswrapper[4915]: I0127 18:42:55.926176 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:55Z","lastTransitionTime":"2026-01-27T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.029913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.029967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.029984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.030007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.030023 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.132083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.132119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.132130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.132142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.132150 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.235312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.235690 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.235922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.236211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.236433 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.338478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.338549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.338562 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.338585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.338601 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.357041 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.357449 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.357372 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.357770 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.357810 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.358352 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.358070 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.357949 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.383379 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.383902 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 18:42:56 crc kubenswrapper[4915]: E0127 18:42:56.384157 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:43:28.384124575 +0000 UTC m=+99.741978279 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.440899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.440939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.440947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.440963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.440974 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.544100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.544132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.544143 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.544156 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.544164 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.603508 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:37:36.435652233 +0000 UTC Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.647193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.647229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.647238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.647252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.647261 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.749231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.749281 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.749298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.749318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.749334 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.852389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.852431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.852442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.852457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.852468 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.956215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.956259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.956281 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.956298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:56 crc kubenswrapper[4915]: I0127 18:42:56.956312 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:56Z","lastTransitionTime":"2026-01-27T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.059693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.059750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.059768 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.059822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.059840 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.162460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.162506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.162517 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.162534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.162546 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.265151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.265194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.265214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.265238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.265257 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.367930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.367988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.368004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.368027 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.368045 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.470491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.470526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.470536 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.470552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.470563 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.572526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.572577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.572589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.572604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.572614 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.605542 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:24:51.910699463 +0000 UTC Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.674638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.674683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.674695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.674712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.674725 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.777195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.777254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.777270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.777294 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.777310 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.879215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.879277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.879294 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.879317 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.879334 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.981067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.981101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.981109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.981122 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:57 crc kubenswrapper[4915]: I0127 18:42:57.981131 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:57Z","lastTransitionTime":"2026-01-27T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.083781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.083864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.083877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.083893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.083905 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.186701 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.186760 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.186778 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.186831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.186851 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.289350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.289381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.289389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.289404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.289417 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.357378 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.357451 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.357453 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:42:58 crc kubenswrapper[4915]: E0127 18:42:58.357510 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:42:58 crc kubenswrapper[4915]: E0127 18:42:58.357677 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.357739 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:42:58 crc kubenswrapper[4915]: E0127 18:42:58.357853 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:42:58 crc kubenswrapper[4915]: E0127 18:42:58.357946 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.392011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.392060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.392078 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.392100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.392117 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.494318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.494353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.494387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.494403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.494417 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.597246 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.597298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.597307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.597320 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.597329 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.606572 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:27:19.642568517 +0000 UTC Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.699086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.699134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.699151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.699174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.699191 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.801743 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.801774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.801783 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.801811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.801821 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.904296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.904366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.904391 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.904419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:58 crc kubenswrapper[4915]: I0127 18:42:58.904441 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:58Z","lastTransitionTime":"2026-01-27T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.007882 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.007940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.007957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.007980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.007994 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.110877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.110927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.110943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.110964 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.110979 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.213506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.213588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.213615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.213648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.213671 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.316076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.316105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.316115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.316128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.316137 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.373202 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.389111 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.401418 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc 
kubenswrapper[4915]: I0127 18:42:59.412830 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.418299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.418351 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.418368 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.418388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.418406 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.424433 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.434210 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.447152 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:4
1:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.462316 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.475264 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.489344 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.504429 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.521069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.521108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.521119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.521134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.521146 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.529045 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.549817 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.572263 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.589699 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.607586 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:05:51.4963768 +0000 UTC Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.610176 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.623651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.623716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.623729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.623749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.623760 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.631870 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.650633 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.725667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.725720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.725731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.725744 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.725777 4915 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.822501 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/0.log" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.822601 4915 generic.go:334] "Generic (PLEG): container finished" podID="fe27a668-1ea7-44c8-9490-55cf8db5dad9" containerID="f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0" exitCode=1 Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.822670 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerDied","Data":"f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.823467 4915 scope.go:117] "RemoveContainer" containerID="f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.830307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.830400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.830417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.830464 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.830483 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.848463 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.867616 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.883485 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc 
kubenswrapper[4915]: I0127 18:42:59.899634 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.917749 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934205 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934247 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:42:59Z","lastTransitionTime":"2026-01-27T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.934704 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.950229 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.965366 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.978415 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:42:59 crc kubenswrapper[4915]: I0127 18:42:59.991238 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.011921 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18
:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.029779 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.036945 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.036981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.036991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.037006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.037016 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.043584 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffd
d6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.057879 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.070466 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.095433 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.111568 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.128154 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.139385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.139445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.139460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.139478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.139490 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.241587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.241620 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.241630 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.241644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.241654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.351187 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.351236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.351252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.351272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.351288 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.356815 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.356834 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.356852 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.356886 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:00 crc kubenswrapper[4915]: E0127 18:43:00.356963 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:00 crc kubenswrapper[4915]: E0127 18:43:00.357102 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:00 crc kubenswrapper[4915]: E0127 18:43:00.357232 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:00 crc kubenswrapper[4915]: E0127 18:43:00.357417 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.454326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.454364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.454373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.454387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.454398 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.556891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.556917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.556925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.556937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.556947 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.608232 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:59:35.681609593 +0000 UTC Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.658935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.658982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.658999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.659015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.659026 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.761565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.761594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.761603 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.761614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.761643 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.828210 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/0.log" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.828277 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerStarted","Data":"999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.846732 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.860727 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.864971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.865331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.865391 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc 
kubenswrapper[4915]: I0127 18:43:00.865416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.865716 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.873577 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.888479 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.909093 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.921053 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.937537 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.955044 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a
214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.968201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.968250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.968265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.968285 4915 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.968298 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.970096 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.976104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.976143 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.976152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.976166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.976175 4915 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: E0127 18:43:00.988785 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.991963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.991995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.992004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.992015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.992023 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:00Z","lastTransitionTime":"2026-01-27T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:00 crc kubenswrapper[4915]: I0127 18:43:00.996877 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: E0127 18:43:01.007205 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.011030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.011142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.011229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.011335 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.011436 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: E0127 18:43:01.024099 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.024100 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.028010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.028043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.028055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.028070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.028081 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: E0127 18:43:01.042420 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.043708 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 
18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.047395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.047444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.047509 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.047535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.047551 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.057883 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffd
d6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: E0127 18:43:01.063819 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: E0127 18:43:01.064018 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.070291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.070343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.070356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.070381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.070394 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.072276 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.087185 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.096597 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.107310 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.119051 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:01Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.172557 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.172600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.172611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.172626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.172636 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.275529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.275561 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.275573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.275588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.275600 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.377546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.377596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.377609 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.377627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.377641 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.480039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.480088 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.480102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.480121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.480134 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.583227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.583275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.583288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.583306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.583319 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.608702 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:32:33.050040909 +0000 UTC Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.685154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.685184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.685194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.685207 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.685217 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.787888 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.787929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.787941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.787959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.787972 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.891219 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.891266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.891280 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.891300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.891314 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.994226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.994274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.994286 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.994303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:01 crc kubenswrapper[4915]: I0127 18:43:01.994315 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:01Z","lastTransitionTime":"2026-01-27T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.097388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.097452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.097469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.097492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.097509 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.200522 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.200570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.200580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.200599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.200611 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.303456 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.303502 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.303511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.303524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.303532 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.357458 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.357517 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.357533 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.357469 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:02 crc kubenswrapper[4915]: E0127 18:43:02.357618 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:02 crc kubenswrapper[4915]: E0127 18:43:02.357690 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:02 crc kubenswrapper[4915]: E0127 18:43:02.357834 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:02 crc kubenswrapper[4915]: E0127 18:43:02.357916 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.406865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.406899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.406911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.406925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.406937 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.509579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.509619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.509630 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.509647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.509658 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.609092 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:23:02.609086251 +0000 UTC Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.612295 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.612326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.612335 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.612349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.612358 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.715087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.715153 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.715165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.715186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.715200 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.817994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.818061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.818074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.818097 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.818111 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.922150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.922217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.922240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.922270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:02 crc kubenswrapper[4915]: I0127 18:43:02.922293 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:02Z","lastTransitionTime":"2026-01-27T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.024562 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.024845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.024855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.024868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.024877 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.128341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.128408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.128432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.128460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.128485 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.231490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.231567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.231588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.231617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.231635 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.335018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.335082 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.335101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.335124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.335143 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.438552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.438601 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.438618 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.438641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.438658 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.542071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.542132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.542150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.542173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.542190 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.609887 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:04:30.958038432 +0000 UTC Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.645098 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.645154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.645171 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.645195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.645213 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.748523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.748581 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.748600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.748627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.748650 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.851648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.851718 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.851738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.851766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.851784 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.955186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.955234 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.955251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.955273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:03 crc kubenswrapper[4915]: I0127 18:43:03.955292 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:03Z","lastTransitionTime":"2026-01-27T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.058111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.058148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.058159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.058175 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.058187 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.160736 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.160820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.160838 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.160861 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.160881 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.264214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.264269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.264299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.264322 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.264340 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.357648 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:04 crc kubenswrapper[4915]: E0127 18:43:04.357860 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.358119 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:04 crc kubenswrapper[4915]: E0127 18:43:04.358278 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.358497 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:04 crc kubenswrapper[4915]: E0127 18:43:04.358608 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.358833 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:04 crc kubenswrapper[4915]: E0127 18:43:04.358931 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.368009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.368059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.368080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.368105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.368122 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.471326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.471419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.471443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.471473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.471495 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.581865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.581949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.581982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.582009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.582029 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.610209 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:25:06.789403892 +0000 UTC Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.685277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.685308 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.685317 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.685331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.685341 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.788348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.788401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.788416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.788434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.788446 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.891264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.891353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.891372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.891395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.891411 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.994275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.994330 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.994347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.994371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:04 crc kubenswrapper[4915]: I0127 18:43:04.994389 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:04Z","lastTransitionTime":"2026-01-27T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.097996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.098071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.098094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.098120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.098144 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.201021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.201077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.201094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.201118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.201134 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.304615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.304687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.304705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.305217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.305277 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.408364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.408722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.408742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.408766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.408782 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.512389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.512471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.512494 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.512525 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.512544 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.611094 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:48:59.602416697 +0000 UTC Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.615936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.616179 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.616204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.616237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.616262 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.719959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.720040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.720061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.720090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.720114 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.822962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.823021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.823046 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.823076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.823098 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.926361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.926455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.926478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.926885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:05 crc kubenswrapper[4915]: I0127 18:43:05.927130 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:05Z","lastTransitionTime":"2026-01-27T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.030641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.030719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.030740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.030765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.030784 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.133419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.133455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.133465 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.133477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.133486 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.236901 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.236948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.236957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.236976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.236985 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.340094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.340136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.340153 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.340167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.340176 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.356533 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.356608 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:06 crc kubenswrapper[4915]: E0127 18:43:06.356631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:06 crc kubenswrapper[4915]: E0127 18:43:06.356761 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.356800 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.356957 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:06 crc kubenswrapper[4915]: E0127 18:43:06.357027 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:06 crc kubenswrapper[4915]: E0127 18:43:06.357157 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.443441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.443500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.443524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.443552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.443570 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.547096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.547200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.547219 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.547272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.547290 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.611225 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:52:09.309426974 +0000 UTC Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.649845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.649904 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.649922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.649946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.649963 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.752395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.752441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.752457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.752481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.752497 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.854698 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.854764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.854834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.854870 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.854895 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.958708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.958783 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.958829 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.958852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:06 crc kubenswrapper[4915]: I0127 18:43:06.958868 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:06Z","lastTransitionTime":"2026-01-27T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.067363 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.067429 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.067445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.067508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.067531 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.171028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.171089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.171124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.171160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.171186 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.273660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.273758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.273775 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.273821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.273839 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.376587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.376693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.376716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.376740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.376758 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.480575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.480634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.480651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.480675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.480691 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.584272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.584687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.584922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.585158 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.585370 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.611589 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:22:39.548853283 +0000 UTC Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.688587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.688657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.688679 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.688710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.688732 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.792471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.792776 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.792947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.793075 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.793228 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.897018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.897076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.897088 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.897106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:07 crc kubenswrapper[4915]: I0127 18:43:07.897118 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:07Z","lastTransitionTime":"2026-01-27T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.000589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.000662 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.000685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.000715 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.000736 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.103768 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.103847 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.103866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.103891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.103906 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.207501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.207569 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.207588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.207614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.207633 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.311116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.311204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.311230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.311267 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.311291 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.357391 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.357466 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:08 crc kubenswrapper[4915]: E0127 18:43:08.357592 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:08 crc kubenswrapper[4915]: E0127 18:43:08.357781 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.358068 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.358203 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:08 crc kubenswrapper[4915]: E0127 18:43:08.358264 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:43:08 crc kubenswrapper[4915]: E0127 18:43:08.358373 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.413652 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.413683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.413693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.413709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.413721 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.517114 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.517235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.517257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.517285 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.517307 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.612303 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:35:42.467854476 +0000 UTC
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.621138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.621195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.621214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.621240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.621282 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.723782 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.723891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.723915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.723943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.723964 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.827570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.827626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.827642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.827668 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.827686 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.929855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.929926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.929946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.929971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:08 crc kubenswrapper[4915]: I0127 18:43:08.929989 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:08Z","lastTransitionTime":"2026-01-27T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.033406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.033488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.033514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.033545 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.033569 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.136027 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.136089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.136107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.136130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.136147 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.239118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.239197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.239222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.239251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.239279 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.342842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.342906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.342924 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.342948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.342968 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.357971 4915 scope.go:117] "RemoveContainer" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.385826 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.407707 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.427113 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.445654 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.446506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.446575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.446598 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.446629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.446651 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.476377 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.493310 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.514835 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.542318 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.549477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.549570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.549596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.549631 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.549649 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.568814 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.590982 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.612742 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:22:01.781286605 +0000 UTC Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.613008 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.629047 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.653624 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.653711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.653735 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.653764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.653787 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.662124 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.686897 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.716354 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.735597 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.756477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.756512 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.756523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.756539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.756550 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.761254 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully 
copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.782948 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc 
kubenswrapper[4915]: I0127 18:43:09.859163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.859200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.859212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.859228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.859238 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.865766 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/2.log" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.869456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.870032 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.887218 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.914863 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed
343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.937518 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.953976 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.962130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.962172 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.962185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.962203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.962216 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:09Z","lastTransitionTime":"2026-01-27T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.966939 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc 
kubenswrapper[4915]: I0127 18:43:09.977363 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g
g5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:09 crc kubenswrapper[4915]: I0127 18:43:09.991450 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:09Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.006722 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.020536 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.034126 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.046714 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.060910 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.064908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.064956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.064967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc 
kubenswrapper[4915]: I0127 18:43:10.064984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.064995 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.085714 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.106514 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.122631 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.135481 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.151853 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.163699 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.167397 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.167424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.167433 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.167445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.167456 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.269657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.269709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.269725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.269747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.269764 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.356711 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.356750 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.356752 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:10 crc kubenswrapper[4915]: E0127 18:43:10.356892 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.356944 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:10 crc kubenswrapper[4915]: E0127 18:43:10.357012 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:10 crc kubenswrapper[4915]: E0127 18:43:10.357118 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:10 crc kubenswrapper[4915]: E0127 18:43:10.357172 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.373004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.373067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.373093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.373120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.373142 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.476155 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.476218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.476235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.476259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.476277 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.579336 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.579393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.579417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.579444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.579465 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.613992 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:43:49.465439843 +0000 UTC Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.682831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.682897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.682924 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.682956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.682981 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.786181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.786245 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.786264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.786288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.786306 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.876199 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/3.log" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.877207 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/2.log" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.880746 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" exitCode=1 Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.880828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.880877 4915 scope.go:117] "RemoveContainer" containerID="4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.882017 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:43:10 crc kubenswrapper[4915]: E0127 18:43:10.882258 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.888339 4915 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.888422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.888446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.888475 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.888496 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.907551 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bd
ca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.927831 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.946655 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.967253 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.985259 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.991043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.991102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.991120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:10 crc 
kubenswrapper[4915]: I0127 18:43:10.991145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:10 crc kubenswrapper[4915]: I0127 18:43:10.991164 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:10Z","lastTransitionTime":"2026-01-27T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.002097 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.025244 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.058416 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.082480 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.094518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.094567 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.094585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.094608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.094627 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.096454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.096501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.096520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.096543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.096560 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.104012 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c
a94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.109561 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.114900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.114954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.114973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.114999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.115015 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.124712 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.133382 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.138566 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.138611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.138627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.138647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.138665 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.139259 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.158192 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.163999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.164075 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.164089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.164113 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.164128 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.172142 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e94bb0071fa44063d710c35fd37b1af61ef461a24c240d9c057fe50fc31cb8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:40Z\\\",\\\"message\\\":\\\"g/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 18:42:40.330850 6568 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 18:42:40.330877 6568 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 18:42:40.330913 6568 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0127 18:42:40.330921 6568 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 18:42:40.330921 6568 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 18:42:40.330936 6568 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 18:42:40.330947 6568 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 18:42:40.330954 6568 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 18:42:40.330960 6568 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 18:42:40.330981 6568 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 18:42:40.331005 6568 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 18:42:40.331008 6568 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 18:42:40.331021 6568 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 18:42:40.331035 6568 factory.go:656] Stopping watch factory\\\\nI0127 18:42:40.331046 6568 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"message\\\":\\\"rplf for pod on switch crc\\\\nI0127 18:43:10.297471 6965 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nI0127 18:43:10.297479 6965 services_controller.go:454] Service openshift-machine-api/machine-api-controllers for network=default has 3 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0127 18:43:10.297494 6965 services_controller.go:473] Services do not match for 
network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",
\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is 
after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.182121 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.186756 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.186853 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.186881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.186913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.186938 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.191160 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.209047 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.209358 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.211726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.211782 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.211842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.211872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc 
kubenswrapper[4915]: I0127 18:43:11.211893 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.218471 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807
c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f
4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357a
bbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\
\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.238224 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.257369 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.272077 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.314844 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.314925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.314939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.314963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.315002 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.417988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.418049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.418070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.418094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.418114 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.521553 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.521627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.521645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.521682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.521702 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.614842 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:20:00.569294013 +0000 UTC Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.624573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.624625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.624643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.624669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.624686 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.727055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.727116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.727129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.727144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.727155 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.830249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.830300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.830321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.830349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.830373 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.887394 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/3.log" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.892878 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:43:11 crc kubenswrapper[4915]: E0127 18:43:11.893316 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.913760 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.934835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.934954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.935029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 
18:43:11.935061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.935083 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:11Z","lastTransitionTime":"2026-01-27T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.935032 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.949872 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc 
kubenswrapper[4915]: I0127 18:43:11.965132 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:11 crc kubenswrapper[4915]: I0127 18:43:11.984554 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.001857 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:11Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.017928 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.031697 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.038132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.038204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.038232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.038263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.038285 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.052257 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.072013 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.093162 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f
66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.111130 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.129935 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.140950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.141186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.141343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.141560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.141732 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.146677 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.182896 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"message\\\":\\\"rplf for pod on switch crc\\\\nI0127 18:43:10.297471 6965 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nI0127 18:43:10.297479 6965 services_controller.go:454] Service openshift-machine-api/machine-api-controllers for network=default has 3 cluster-wide, 0 
per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0127 18:43:10.297494 6965 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.216868 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.241720 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85
e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.245359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.245429 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.245447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.245472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.245490 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.264726 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:12Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.349165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.349218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.349236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.349269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.349289 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.357457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.357543 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.357653 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:12 crc kubenswrapper[4915]: E0127 18:43:12.357660 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.357698 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:12 crc kubenswrapper[4915]: E0127 18:43:12.357890 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:12 crc kubenswrapper[4915]: E0127 18:43:12.358053 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:43:12 crc kubenswrapper[4915]: E0127 18:43:12.358240 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.452602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.452655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.452676 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.452706 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.452728 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.555918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.555994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.556014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.556087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.556117 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.615456 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:08:32.818776954 +0000 UTC
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.659024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.659077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.659094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.659118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.659136 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.761958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.762053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.762073 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.762101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.762122 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.865274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.865339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.865359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.865386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.865405 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.969236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.969324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.969354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.969387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:12 crc kubenswrapper[4915]: I0127 18:43:12.969417 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:12Z","lastTransitionTime":"2026-01-27T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.072894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.072957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.072976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.073000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.073020 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.177163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.177230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.177247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.177271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.177290 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.281198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.281257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.281274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.281296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.281314 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.383641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.383710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.383727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.383751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.383770 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.487258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.487310 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.487326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.487350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.487368 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.590856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.590898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.590915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.590986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.591003 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.616595 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:23:52.630705909 +0000 UTC
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.694241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.694301 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.694317 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.694339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.694358 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.797875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.797930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.797948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.797971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.797988 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.900525 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.900605 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.900628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.900653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:13 crc kubenswrapper[4915]: I0127 18:43:13.900716 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:13Z","lastTransitionTime":"2026-01-27T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.004346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.004413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.004430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.004454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.004471 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.107695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.107757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.107775 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.107833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.107853 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.211140 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.211199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.211216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.211240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.211257 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.314963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.315030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.315047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.315072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.315088 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.357538 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.357588 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.357588 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.357720 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.358052 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.358240 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.358364 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.358538 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.418471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.418526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.418543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.418567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.418583 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.493894 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.494186 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.494134548 +0000 UTC m=+149.851988252 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.521458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.521541 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.521560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.521586 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.521606 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.595481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.595549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.595591 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.595632 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595826 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595873 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595881 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595920 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595942 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595971 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.596003 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.595965674 +0000 UTC m=+149.953819378 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.596052 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.596024346 +0000 UTC m=+149.953878050 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.595902 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.596082 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.596067727 +0000 UTC m=+149.953921421 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.596105 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 18:43:14 crc kubenswrapper[4915]: E0127 18:43:14.596233 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.59620504 +0000 UTC m=+149.954058744 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.617645 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:04:29.542618113 +0000 UTC Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.625301 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.625349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.625366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.625389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.625407 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.729093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.729160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.729198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.729222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.729239 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.832873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.832963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.833016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.833038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.833055 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.935198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.935259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.935282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.935313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:14 crc kubenswrapper[4915]: I0127 18:43:14.935341 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:14Z","lastTransitionTime":"2026-01-27T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.038341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.038454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.038479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.038661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.038865 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.142734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.142882 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.142955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.142986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.143046 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.246975 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.247035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.247054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.247078 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.247098 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.349680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.349723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.349739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.349762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.349779 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.453106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.453181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.453204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.453232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.453254 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.556321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.556398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.556416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.556447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.556472 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.618354 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:01:27.849262497 +0000 UTC Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.659969 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.660025 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.660043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.660071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.660089 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.763664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.763724 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.763742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.763766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.763784 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.866608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.866706 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.866732 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.866763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.866787 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.971747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.972158 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.972171 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.972191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:15 crc kubenswrapper[4915]: I0127 18:43:15.972202 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:15Z","lastTransitionTime":"2026-01-27T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.074984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.075026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.075035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.075050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.075063 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.178129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.178199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.178220 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.178244 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.178265 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.281457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.281515 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.281534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.281558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.281576 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.357359 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.357439 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:16 crc kubenswrapper[4915]: E0127 18:43:16.358253 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.357513 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.357506 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:16 crc kubenswrapper[4915]: E0127 18:43:16.358440 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:16 crc kubenswrapper[4915]: E0127 18:43:16.358581 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:16 crc kubenswrapper[4915]: E0127 18:43:16.358755 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.384499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.384548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.384565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.384587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.384604 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.487963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.488025 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.488045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.488071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.488088 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.591093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.591168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.591191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.591217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.591234 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.619762 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:06:30.807254463 +0000 UTC Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.694165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.694220 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.694242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.694266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.694285 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.797006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.797093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.797117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.797149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.797175 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.900590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.900625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.900636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.900652 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:16 crc kubenswrapper[4915]: I0127 18:43:16.900665 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:16Z","lastTransitionTime":"2026-01-27T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.003640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.003715 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.003733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.003758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.003777 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.106729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.106831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.106850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.106880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.106900 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.210329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.210390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.210406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.210431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.210450 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.313822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.313929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.313951 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.313979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.313998 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.417354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.417416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.417438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.417463 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.417483 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.520510 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.520578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.520596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.520623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.520642 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.620944 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:45:54.204045533 +0000 UTC Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.622911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.622970 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.622987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.623011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.623028 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.726431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.726485 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.726503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.726525 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.726541 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.833289 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.833347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.833366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.833404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.833423 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.936651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.936714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.936731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.936754 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:17 crc kubenswrapper[4915]: I0127 18:43:17.936771 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:17Z","lastTransitionTime":"2026-01-27T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.040639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.040713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.040738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.040767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.040832 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.144604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.144898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.145011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.145117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.145263 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.248372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.248673 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.248770 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.248914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.249084 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.353342 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.353462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.353489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.353521 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.353542 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.356652 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.356831 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.356878 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.356721 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:18 crc kubenswrapper[4915]: E0127 18:43:18.357186 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:18 crc kubenswrapper[4915]: E0127 18:43:18.357318 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:18 crc kubenswrapper[4915]: E0127 18:43:18.357450 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:18 crc kubenswrapper[4915]: E0127 18:43:18.357614 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.457195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.457255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.457273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.457296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.457313 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.560528 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.560585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.560602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.560625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.560644 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.622171 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:51:17.356961767 +0000 UTC Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.663227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.663302 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.663321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.663352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.663374 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.766753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.766858 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.766883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.766914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.766935 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.869766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.869871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.869897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.869927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.869949 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.972164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.972216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.972234 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.972256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:18 crc kubenswrapper[4915]: I0127 18:43:18.972273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:18Z","lastTransitionTime":"2026-01-27T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.076105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.076163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.076181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.076208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.076226 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.178906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.178966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.178985 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.179009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.179031 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.282111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.282161 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.282177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.282199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.282216 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.377964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.384558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.384628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.384647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.384674 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.384691 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.399760 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.416581 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.430607 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.443170 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee372902556650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.469247 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:4
1:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439
e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.482738 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.487879 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.487942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.487966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.487998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.488034 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.501031 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.513200 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.527991 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.544516 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"message\\\":\\\"rplf for pod on switch crc\\\\nI0127 18:43:10.297471 6965 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nI0127 18:43:10.297479 6965 services_controller.go:454] Service openshift-machine-api/machine-api-controllers for network=default has 3 cluster-wide, 0 
per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0127 18:43:10.297494 6965 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.574180 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.591614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.591664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.591682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.591706 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.591725 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.596046 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.611251 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.623107 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:13:13.022747225 +0000 UTC Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.634011 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4
a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\
"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.654930 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.669383 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.694699 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:19Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.695469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.695536 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.695555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.695580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.695600 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.798439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.798508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.798530 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.798561 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.798582 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.901954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.902024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.902043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.902070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:19 crc kubenswrapper[4915]: I0127 18:43:19.902088 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:19Z","lastTransitionTime":"2026-01-27T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.004717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.004781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.004836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.004878 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.004899 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.108137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.108194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.108212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.108240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.108260 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.211972 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.212033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.212045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.212064 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.212077 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.315719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.315820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.315839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.315865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.315887 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.357237 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.357273 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.357337 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.357337 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:20 crc kubenswrapper[4915]: E0127 18:43:20.357440 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:20 crc kubenswrapper[4915]: E0127 18:43:20.357589 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:20 crc kubenswrapper[4915]: E0127 18:43:20.357690 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:20 crc kubenswrapper[4915]: E0127 18:43:20.357828 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.419712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.419761 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.419777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.419824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.419841 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.522330 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.522377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.522393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.522417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.522437 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.623742 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:32:52.450812453 +0000 UTC Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.625462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.625518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.625544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.625573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.625594 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.729311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.729394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.729413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.729448 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.729475 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.832969 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.833034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.833051 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.833078 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.833096 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.936361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.936424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.936442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.936467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:20 crc kubenswrapper[4915]: I0127 18:43:20.936487 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:20Z","lastTransitionTime":"2026-01-27T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.039889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.039979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.040009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.040041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.040065 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.143456 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.143524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.143543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.143566 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.143584 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.247165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.247237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.247252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.247278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.247294 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.350087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.350167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.350192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.350223 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.350246 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.454351 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.454423 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.454444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.454469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.454487 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.557763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.557847 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.557866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.557889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.557906 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.567551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.567613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.567631 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.567656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.567675 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.588065 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.592877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.592937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.592954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.592979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.592997 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.620902 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.624316 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:30:11.893073079 +0000 UTC Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.627554 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.627770 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.627986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.628146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.628279 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.648460 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.655387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.655494 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.655514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.655547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.655566 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.675003 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.680771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.680867 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.680891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.680928 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.680958 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.701988 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:21Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:21 crc kubenswrapper[4915]: E0127 18:43:21.702351 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.704539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.704604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.704623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.704648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.704667 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.808708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.808759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.808777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.808830 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.808849 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.912480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.912570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.912618 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.912643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:21 crc kubenswrapper[4915]: I0127 18:43:21.912660 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:21Z","lastTransitionTime":"2026-01-27T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.015121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.015173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.015186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.015202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.015214 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.117700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.117771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.117835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.117866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.117886 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.221205 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.221265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.221282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.221310 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.221327 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.323998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.324054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.324069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.324090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.324108 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.357307 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.357362 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.357422 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.357519 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:22 crc kubenswrapper[4915]: E0127 18:43:22.357692 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:22 crc kubenswrapper[4915]: E0127 18:43:22.357917 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:22 crc kubenswrapper[4915]: E0127 18:43:22.358335 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:22 crc kubenswrapper[4915]: E0127 18:43:22.358414 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.371782 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.427293 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.427361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.427386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.427416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.427438 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.530639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.530712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.530736 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.530765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.530788 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.624625 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:12:11.961270635 +0000 UTC Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.632839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.632914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.632961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.632993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.633016 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.736364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.736404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.736415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.736431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.736445 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.840311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.840382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.840399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.840424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.840442 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.943216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.943258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.943273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.943295 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:22 crc kubenswrapper[4915]: I0127 18:43:22.943311 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:22Z","lastTransitionTime":"2026-01-27T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.046066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.046188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.046214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.046247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.046273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.149089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.149150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.149167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.149195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.149213 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.252636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.252685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.252702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.252725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.252744 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.355543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.355580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.355597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.355620 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.355637 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.357995 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:43:23 crc kubenswrapper[4915]: E0127 18:43:23.358227 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.458291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.458340 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.458356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.458377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.458395 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.561282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.561333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.561349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.561373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.561391 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.625147 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:54:42.49623629 +0000 UTC Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.671181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.671247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.671260 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.671276 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.671672 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.775042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.775125 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.775150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.775180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.775202 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.878401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.878475 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.878496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.878524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.878548 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.981256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.981327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.981345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.981374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:23 crc kubenswrapper[4915]: I0127 18:43:23.981398 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:23Z","lastTransitionTime":"2026-01-27T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.084895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.084958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.084976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.085006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.085024 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.187968 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.188037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.188060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.188090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.188111 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.291591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.291653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.291675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.291702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.291720 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.357181 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.357262 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.357313 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.357510 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:24 crc kubenswrapper[4915]: E0127 18:43:24.357452 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:24 crc kubenswrapper[4915]: E0127 18:43:24.357687 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:24 crc kubenswrapper[4915]: E0127 18:43:24.357826 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:24 crc kubenswrapper[4915]: E0127 18:43:24.357922 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.395129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.395204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.395230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.395262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.395286 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.499028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.499099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.499121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.499151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.499174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.601627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.601660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.601669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.601683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.601692 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.625560 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:04:00.42381844 +0000 UTC Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.704347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.704388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.704401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.704417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.704430 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.808144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.808199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.808217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.808242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.808273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.911105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.911146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.911154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.911168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:24 crc kubenswrapper[4915]: I0127 18:43:24.911178 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:24Z","lastTransitionTime":"2026-01-27T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.014110 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.014181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.014205 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.014232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.014250 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.117365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.117727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.118018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.118292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.118510 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.221434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.222035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.222235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.222491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.222709 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.326169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.327069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.327225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.327374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.327497 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.430955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.431014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.431030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.431053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.431071 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.533731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.533824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.533843 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.533868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.533886 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.625865 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:43:31.369678869 +0000 UTC Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.636747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.636876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.636902 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.636933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.636953 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.740647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.740697 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.740717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.740741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.740758 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.843699 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.843756 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.843763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.843781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.843815 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.946969 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.947038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.947056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.947084 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:25 crc kubenswrapper[4915]: I0127 18:43:25.947104 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:25Z","lastTransitionTime":"2026-01-27T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.049988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.050056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.050073 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.050097 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.050115 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.152169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.152202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.152210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.152222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.152231 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.256167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.256251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.256276 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.256312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.256339 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.357138 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.357208 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.357161 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.357143 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:26 crc kubenswrapper[4915]: E0127 18:43:26.357272 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:26 crc kubenswrapper[4915]: E0127 18:43:26.358340 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:26 crc kubenswrapper[4915]: E0127 18:43:26.359376 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:26 crc kubenswrapper[4915]: E0127 18:43:26.357508 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.363723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.363823 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.363849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.363940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.363968 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.466158 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.466204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.466215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.466231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.466243 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.569160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.569202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.569212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.569227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.569236 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.626161 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:42:06.163366399 +0000 UTC Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.672017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.672091 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.672118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.672149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.672175 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.775497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.775911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.776047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.776239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.776401 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.880346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.880447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.880476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.880514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.880539 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.983413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.983480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.983492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.983516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:26 crc kubenswrapper[4915]: I0127 18:43:26.983532 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:26Z","lastTransitionTime":"2026-01-27T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.086072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.086174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.086194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.086229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.086258 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.189868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.189949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.189974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.190012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.190039 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.293090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.293169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.293194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.293224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.293250 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.395892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.395954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.395966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.395984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.395999 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.499346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.499376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.499384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.499398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.499406 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.601883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.601961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.601979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.602004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.602023 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.627266 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:09:51.561059194 +0000 UTC Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.705408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.705481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.705499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.705523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.705542 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.809261 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.809325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.809343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.809371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.809390 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.913160 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.913212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.913225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.913249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:27 crc kubenswrapper[4915]: I0127 18:43:27.913261 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:27Z","lastTransitionTime":"2026-01-27T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.016386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.016451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.016468 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.016493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.016509 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.119628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.119665 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.119677 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.119695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.119705 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.223282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.223340 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.223358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.223379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.223397 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.326132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.326208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.326236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.326267 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.326295 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.357296 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.357332 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.357330 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.357436 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.357623 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.357729 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.357868 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.358052 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.429946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.430012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.430034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.430064 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.430087 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.455163 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.455364 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:28 crc kubenswrapper[4915]: E0127 18:43:28.455447 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs podName:65be8e09-e032-40de-b290-c66c07282211 nodeName:}" failed. No retries permitted until 2026-01-27 18:44:32.455424453 +0000 UTC m=+163.813278157 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs") pod "network-metrics-daemon-d467q" (UID: "65be8e09-e032-40de-b290-c66c07282211") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.532691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.532762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.532788 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.532877 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.532908 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.627596 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:04:08.08925451 +0000 UTC Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.637869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.637939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.637964 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.637997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.638022 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.741647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.741729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.741755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.741783 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.741831 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.845255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.845312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.845328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.845350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.845367 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.948162 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.948250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.948268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.948290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:28 crc kubenswrapper[4915]: I0127 18:43:28.948307 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:28Z","lastTransitionTime":"2026-01-27T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.050946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.050994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.051010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.051032 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.051051 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.154831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.154876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.154892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.154916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.154933 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.258535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.258592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.258609 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.258633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.258651 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.362005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.362065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.362082 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.362107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.362125 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.379774 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4320a2d58d7ffb09a348a22520a58a976cf7fea0b892dd596be588d85fa06228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d662757c425355a214b8f308d7e295594a768bc69fe7c0f6452455b64e9954b3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.397148 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-78l9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cd7e15-a585-4cae-b306-701292248ea6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80a099278492615438458ea45c5c699b0f524562f17f4e5291247636ca65440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vjrg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-78l9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.428440 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb87671e-1bee-4bef-843d-6fce9467079d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:43:10Z\\\",\\\"message\\\":\\\"rplf for pod on switch crc\\\\nI0127 18:43:10.297471 6965 services_controller.go:453] Built service openshift-machine-api/machine-api-controllers template LB for network=default: []services.LB{}\\\\nI0127 18:43:10.297479 6965 services_controller.go:454] Service openshift-machine-api/machine-api-controllers for network=default has 3 cluster-wide, 0 
per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0127 18:43:10.297494 6965 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:43:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://823e96515b4e4b59b5
39438ee06bb0151a147e19bd52b4b404e09d3738af68bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwsc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8spt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.462941 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c69f0cb-d954-4c8c-85e2-4be6fafc0034\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8a9a43b32e9ada53de5ea60427d57b9dc487e40fb4209ab6f17958994e519e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db9f014b43fc8f8c3538f26f7b8d6780027d39256687c25da3feb63e9dfbcfc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c501e3314233fac5f1468958f3504dfa508eae2c5406f71b9396cbca11e159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7969aa2f8d1d3ca23aa4076ede5a12714ae224b5fc533a9e509b6dd30f59d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60a470d99a3fcd64c902b356fe410a02ef3994b89cf4dbaeed8fde6f81cca05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://867d9e80aae209c4bef3bd233e6c9dc3e3a28941c2003d00bd379b94c5044612\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b431e756e57264ac26ca5753af730ccd8a0c594a218cac5418def5a30127c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01dff648c8834cbe47041428ccfe19a8e8d5667c155895641610fd4174de4903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.466332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.466390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.466413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.466443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.466467 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.487607 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ff27c42-efc5-47ca-af7e-18a62d4dded9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T18:42:09Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0127 18:42:03.024769 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 18:42:03.029989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1035845672/tls.crt::/tmp/serving-cert-1035845672/tls.key\\\\\\\"\\\\nI0127 18:42:08.982914 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 18:42:09.352562 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 18:42:09.352601 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 18:42:09.352632 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 18:42:09.352643 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 18:42:09.370725 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 18:42:09.370826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370840 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 18:42:09.370868 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 18:42:09.370877 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 18:42:09.370885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 18:42:09.370894 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 18:42:09.380136 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 18:42:09.396176 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.507307 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16010e03-bab7-40d3-8671-f387d6095bea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fcad643b766147459cff8e5d86ba0f08183df4600e0a8a49b55c3423b9c2136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca94e1dfc1266b556ea7788f6fffdd6b6c0e903b260fa0e24bb3a153921b198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ef38e583db7c2d1d0b27bcbcc8a54937759afabf49daabe8767d5a3f3f2cf78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://293e2a1883668d1318598c3a2acd50c472c736f414e3c20ed2a6ee6e65f9d9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.528220 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca54a52b2f9e8f3f55dcb5356826426e4b680acb83e2285db9d26d4b839ede5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.551746 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c37793b-7e30-4f54-baab-48a358a948b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3569fdd38b83b2b3e932ccb3555bea5f7053e1dbceb3394bee5c38d1f8d7457b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fbc9279ec3db0d278246cd643ff44cc97bf2be5f0536158029bb083fd27711\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d97507502fcd27113ecd14d3202f8608c6f96d8e1794172716ee9aca24807c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://072a85e47099cabff6f70083f337e9b62c87fc1cea0a50017ea4923f34d4f1f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34ed343b226776e4e6276efa9662e43289f09ad530843a0608050a7ebe6ed95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://856a09f357abbe01ac7db94885798adba2a430a9a859ee62da04b996149accfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd9deeadb5613cf6dca85e947734987ea4399d92837d383b26c58500ea4f250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8nk6\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-fxrlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.572036 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-d467q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65be8e09-e032-40de-b290-c66c07282211\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2d4fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-d467q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc 
kubenswrapper[4915]: I0127 18:43:29.574633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.574707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.574740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.574767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.574785 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.592906 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26b231d4-6ced-464a-88a1-efb7d52d11b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a3d7950da3a646b04f04374be21ff070c64d28974d5b36911229c111970f33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80c5ee84a20acb7640d051fe1d0898ccf22f6d70d46822f7b06d19a910e164b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80c5ee84a20acb7640d051fe1d0898ccf22f6d70d46822f7b06d19a910e164b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T18:41:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.612884 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.627767 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:21:15.947455227 +0000 UTC Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.634473 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5bpjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe27a668-1ea7-44c8-9490-55cf8db5dad9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T18:42:59Z\\\",\\\"message\\\":\\\"2026-01-27T18:42:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183\\\\n2026-01-27T18:42:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd689185-9e2e-4f6d-93de-f42b96277183 to /host/opt/cni/bin/\\\\n2026-01-27T18:42:14Z [verbose] multus-daemon started\\\\n2026-01-27T18:42:14Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T18:42:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lf5d8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5bpjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.653897 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.672680 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e61db92-39b6-4acf-89af-34169c61e709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb800d1b88caf3b2b92d88194294798f9caf51fa1813c9749d21108ed1e8177e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744
e2d856dd6202bebe1e457ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mplxv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8dsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.678002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.678092 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.678112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc 
kubenswrapper[4915]: I0127 18:43:29.678145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.678162 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.690639 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-msgjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b776fd1b-6b54-4f8a-a42c-18e8103fded3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4131064f71ef1d17c9c9edcaf6e4626f08d6c39cedc07a62e075cd1a1a91e948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg5j5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-msgjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.708677 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b9bf3c9-fe11-40a2-8577-a53574d1f527\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1beda7b1981ecc74169bf4d243ac420100b1376379cb97f8f8910773567bb7be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f42a86baa3e985c411dde55aaee3729025
56650a28cef391464e618f456a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zf8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:42:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7plll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.721283 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"096f7671-6e96-4fd3-9ed1-26cb83f75fd9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T18:41:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1efe04c37e450836c571f9bdf7c8e33837102195bacc797f5beeb1fe0c381fdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99258c70d764f49ba6495bf0503baaa016a2e5165affdca07db082061b300e2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05e3d04cf9c67686af23c61a44eda5a2952766bafb81e7ad614d411608ef439e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T18:41:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T18:41:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.736108 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.753909 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T18:42:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c2fcb7b7fb4cebd4460b081b8bc888d7818fb53aa414e824e36ce74f090d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T18:42:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T18:43:29Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.780857 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.780908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.780925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.780948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.780964 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.883937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.883984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.884001 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.884021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.884036 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.986914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.986969 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.986986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.987008 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:29 crc kubenswrapper[4915]: I0127 18:43:29.987024 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:29Z","lastTransitionTime":"2026-01-27T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.090178 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.090634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.090823 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.091020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.091192 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.194463 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.194536 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.194565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.194596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.194620 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.297884 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.297944 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.297962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.297989 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.298008 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.357457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.357465 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.357857 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:30 crc kubenswrapper[4915]: E0127 18:43:30.357944 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.357487 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:30 crc kubenswrapper[4915]: E0127 18:43:30.358053 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:30 crc kubenswrapper[4915]: E0127 18:43:30.358094 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:30 crc kubenswrapper[4915]: E0127 18:43:30.357682 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.401400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.401453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.401472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.401498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.401519 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.504097 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.504155 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.504172 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.504197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.504214 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.607081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.607157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.607181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.607215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.607238 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.628587 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:45:16.283598212 +0000 UTC Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.710166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.710249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.710309 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.710343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.710369 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.813392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.813464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.813489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.813518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.813539 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.917211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.917270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.917287 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.917312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:30 crc kubenswrapper[4915]: I0127 18:43:30.917329 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:30Z","lastTransitionTime":"2026-01-27T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.020478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.020549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.020574 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.020604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.020626 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.123975 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.124061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.124085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.124119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.124142 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.227687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.227753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.227771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.227826 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.227844 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.330177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.330236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.330254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.330279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.330296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.433828 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.433897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.433914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.433939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.433956 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.536734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.536838 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.536860 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.536885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.537075 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.629147 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:16:37.751322877 +0000 UTC Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.639477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.639764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.639905 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.639937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.639956 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.743188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.743253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.743275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.743305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.743331 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.845868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.845950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.845983 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.846016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.846037 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.893540 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.893586 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.893598 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.893616 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.893628 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: E0127 18:43:31.911356 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.915327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.915364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.915374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.915391 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.915403 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: E0127 18:43:31.934569 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.940763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.940859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.940884 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.941227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.941257 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: E0127 18:43:31.961874 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.967250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.967308 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.967326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.967354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.967375 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:31 crc kubenswrapper[4915]: E0127 18:43:31.988282 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.993142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.993195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.993214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.993236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:31 crc kubenswrapper[4915]: I0127 18:43:31.993254 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:31Z","lastTransitionTime":"2026-01-27T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.012837 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bd2c9101-94f4-4460-a5d7-3dbfd978bc2d\\\",\\\"systemUUID\\\":\\\"c1860c83-6319-46ea-ba35-7a2106e4ce10\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T18:43:32Z is after 2025-08-24T17:21:41Z" Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.013085 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.015110 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.015173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.015198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.015228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.015251 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.118935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.118986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.119004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.119026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.119043 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.222953 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.223013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.223034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.223062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.223085 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.327245 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.327306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.327324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.327346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.327364 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.357574 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.357680 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.357680 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.358081 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.358562 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.358206 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.358413 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:32 crc kubenswrapper[4915]: E0127 18:43:32.359059 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.430520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.431008 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.431212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.431453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.431682 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.535116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.535167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.535184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.535211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.535293 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.629353 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:14:53.923326747 +0000 UTC Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.638192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.638424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.638580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.638741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.638959 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.742454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.742938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.743087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.743216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.743377 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.847329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.847378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.847395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.847437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.847457 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.950569 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.950619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.950629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.950646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:32 crc kubenswrapper[4915]: I0127 18:43:32.950676 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:32Z","lastTransitionTime":"2026-01-27T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.053144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.053498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.053606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.053703 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.053815 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.156750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.156815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.156827 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.156844 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.156856 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.260011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.260089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.260118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.260147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.260172 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.361840 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.361976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.361998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.362023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.362042 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.464241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.464313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.464329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.464344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.464354 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.567596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.567676 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.567701 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.567730 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.567754 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.630404 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:34:10.153492042 +0000 UTC Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.671646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.671726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.671746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.671784 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.671839 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.775279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.775347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.775366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.775398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.775420 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.879516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.879579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.879598 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.879626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.879646 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.981982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.982041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.982062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.982089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:33 crc kubenswrapper[4915]: I0127 18:43:33.982107 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:33Z","lastTransitionTime":"2026-01-27T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.085006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.085071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.085094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.085122 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.085143 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.187881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.187941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.187961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.187988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.188006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.291117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.291201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.291221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.291245 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.291279 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.356593 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:34 crc kubenswrapper[4915]: E0127 18:43:34.356772 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.357038 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.357166 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:34 crc kubenswrapper[4915]: E0127 18:43:34.357228 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.357280 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:34 crc kubenswrapper[4915]: E0127 18:43:34.357324 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:34 crc kubenswrapper[4915]: E0127 18:43:34.357468 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.394272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.394324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.394342 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.394364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.394383 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.497480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.497558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.497740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.497781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.497864 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.601541 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.601916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.602100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.602299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.602435 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.631586 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:29:21.181638827 +0000 UTC Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.706188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.706236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.706257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.706283 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.706301 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.809443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.809928 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.810242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.810392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.810536 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.913974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.914050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.914075 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.914104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:34 crc kubenswrapper[4915]: I0127 18:43:34.914128 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:34Z","lastTransitionTime":"2026-01-27T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.017301 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.017365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.017382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.017404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.017420 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.121192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.121239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.121255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.121277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.121296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.224493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.224548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.224567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.224589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.224608 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.327568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.327619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.327634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.327655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.327670 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.430151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.430209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.430226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.430263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.430298 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.533836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.533906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.533923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.533948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.533964 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.632537 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:06:48.927809383 +0000 UTC Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.636821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.636873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.636892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.636916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.636939 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.739288 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.739386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.739406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.739431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.739453 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.842751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.842840 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.842858 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.842882 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.842901 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.945852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.945905 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.945921 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.945942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:35 crc kubenswrapper[4915]: I0127 18:43:35.945958 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:35Z","lastTransitionTime":"2026-01-27T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.048752 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.048827 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.048839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.048853 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.048864 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.151518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.151584 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.151606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.151640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.151661 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.253501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.253830 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.253937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.254044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.254164 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.356717 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.356822 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:36 crc kubenswrapper[4915]: E0127 18:43:36.357585 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.356968 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.356929 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.358100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.358138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.358157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.358181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.358201 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: E0127 18:43:36.358968 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:36 crc kubenswrapper[4915]: E0127 18:43:36.359065 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:36 crc kubenswrapper[4915]: E0127 18:43:36.359248 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.360190 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:43:36 crc kubenswrapper[4915]: E0127 18:43:36.360681 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.461154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.461201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.461255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.461279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.461296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.564482 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.564541 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.564558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.564582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.564602 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.633455 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:36:41.160405159 +0000 UTC Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.667991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.668063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.668081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.668110 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.668133 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.770981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.771044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.771067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.771099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.771122 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.874636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.874696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.874713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.874737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.874756 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.977451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.977523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.977549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.977577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:36 crc kubenswrapper[4915]: I0127 18:43:36.977600 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:36Z","lastTransitionTime":"2026-01-27T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.081192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.081280 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.081296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.081324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.081348 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.185351 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.185422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.185444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.185471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.185492 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.288113 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.288201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.288212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.288228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.288238 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.389993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.390064 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.390103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.390131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.390154 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.493022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.493059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.493070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.493085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.493097 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.596559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.596641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.596666 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.596697 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.596720 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.634359 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:17:00.137584713 +0000 UTC Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.699529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.699590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.699613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.699646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.699668 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.801774 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.802239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.802274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.802306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.802329 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.905429 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.905488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.905507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.905535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:37 crc kubenswrapper[4915]: I0127 18:43:37.905553 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:37Z","lastTransitionTime":"2026-01-27T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.008691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.008750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.008761 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.008781 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.008816 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.112081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.112145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.112154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.112174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.112188 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.215346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.215387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.215399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.215414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.215427 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.318821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.318869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.318881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.318899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.318911 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.356730 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.356826 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.356751 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.356754 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:38 crc kubenswrapper[4915]: E0127 18:43:38.356964 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:38 crc kubenswrapper[4915]: E0127 18:43:38.357138 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:38 crc kubenswrapper[4915]: E0127 18:43:38.357309 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:38 crc kubenswrapper[4915]: E0127 18:43:38.357354 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.421661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.421707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.421717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.421739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.421750 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.524582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.524658 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.524671 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.524687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.524719 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.627303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.627365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.627380 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.627396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.627409 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.634857 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:47:12.148863605 +0000 UTC Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.730997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.731074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.731104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.731148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.731171 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.834019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.834065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.834087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.834105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.834130 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.936899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.936943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.936955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.936971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:38 crc kubenswrapper[4915]: I0127 18:43:38.936983 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:38Z","lastTransitionTime":"2026-01-27T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.040664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.040718 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.040734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.040756 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.040772 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.143982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.144132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.144156 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.144180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.144200 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.246264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.246325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.246336 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.246351 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.246361 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.349006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.349046 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.349056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.349071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.349082 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.387314 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.387290604 podStartE2EDuration="1m26.387290604s" podCreationTimestamp="2026-01-27 18:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.38673843 +0000 UTC m=+110.744592124" watchObservedRunningTime="2026-01-27 18:43:39.387290604 +0000 UTC m=+110.745144308" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.447284 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podStartSLOduration=89.447262639 podStartE2EDuration="1m29.447262639s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.445527215 +0000 UTC m=+110.803380899" watchObservedRunningTime="2026-01-27 18:43:39.447262639 +0000 UTC m=+110.805116313" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.456918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.456966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.457023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.457047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.457097 4915 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.461868 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-msgjd" podStartSLOduration=89.461846873 podStartE2EDuration="1m29.461846873s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.457547273 +0000 UTC m=+110.815400957" watchObservedRunningTime="2026-01-27 18:43:39.461846873 +0000 UTC m=+110.819700567" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.483571 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7plll" podStartSLOduration=89.483553718 podStartE2EDuration="1m29.483553718s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.482751818 +0000 UTC m=+110.840605482" watchObservedRunningTime="2026-01-27 18:43:39.483553718 +0000 UTC m=+110.841407382" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.508685 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.508669361 podStartE2EDuration="1m29.508669361s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 18:43:39.507416569 +0000 UTC m=+110.865270233" watchObservedRunningTime="2026-01-27 18:43:39.508669361 +0000 UTC m=+110.866523025" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.543833 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.543819021 podStartE2EDuration="1m29.543819021s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.528861078 +0000 UTC m=+110.886714742" watchObservedRunningTime="2026-01-27 18:43:39.543819021 +0000 UTC m=+110.901672685" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.559954 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.559943934 podStartE2EDuration="58.559943934s" podCreationTimestamp="2026-01-27 18:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.544051257 +0000 UTC m=+110.901904921" watchObservedRunningTime="2026-01-27 18:43:39.559943934 +0000 UTC m=+110.917797598" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.560455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.560490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.560500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.560511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.560520 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.604122 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-78l9d" podStartSLOduration=90.604095194 podStartE2EDuration="1m30.604095194s" podCreationTimestamp="2026-01-27 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.575359528 +0000 UTC m=+110.933213192" watchObservedRunningTime="2026-01-27 18:43:39.604095194 +0000 UTC m=+110.961948858" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.635427 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:50:46.633486523 +0000 UTC Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.652830 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fxrlf" podStartSLOduration=89.652812961 podStartE2EDuration="1m29.652812961s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.65238197 +0000 UTC m=+111.010235644" watchObservedRunningTime="2026-01-27 18:43:39.652812961 +0000 UTC m=+111.010666625" Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.663042 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.663105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.663117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.663133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.663148 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.689956 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.689942131 podStartE2EDuration="17.689942131s" podCreationTimestamp="2026-01-27 18:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.689751026 +0000 UTC m=+111.047604690" watchObservedRunningTime="2026-01-27 18:43:39.689942131 +0000 UTC m=+111.047795795"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.758034 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5bpjb" podStartSLOduration=89.758006513 podStartE2EDuration="1m29.758006513s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:39.728442397 +0000 UTC m=+111.086296061" watchObservedRunningTime="2026-01-27 18:43:39.758006513 +0000 UTC m=+111.115860177"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.766206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.766244 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.766257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.766275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.766288 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.869508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.869576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.869594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.869619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.869637 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.972354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.972418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.972430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.972450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:39 crc kubenswrapper[4915]: I0127 18:43:39.972463 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:39Z","lastTransitionTime":"2026-01-27T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.076178 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.076238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.076249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.076268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.076281 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.179832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.179923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.179940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.179963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.179977 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.282777 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.282839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.282851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.282867 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.282880 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.356894 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:40 crc kubenswrapper[4915]: E0127 18:43:40.357123 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.357429 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:40 crc kubenswrapper[4915]: E0127 18:43:40.357536 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.357731 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:40 crc kubenswrapper[4915]: E0127 18:43:40.357860 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.358073 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:40 crc kubenswrapper[4915]: E0127 18:43:40.358271 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.385999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.386052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.386067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.386087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.386104 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.489728 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.489811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.489825 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.489849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.489865 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.592020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.592058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.592066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.592084 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.592094 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.636648 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:06:41.726543777 +0000 UTC
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.694765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.694811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.694820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.694841 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.694853 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.797306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.797363 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.797382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.797408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.797426 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.900167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.900215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.900225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.900240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:40 crc kubenswrapper[4915]: I0127 18:43:40.900254 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:40Z","lastTransitionTime":"2026-01-27T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.002575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.002642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.002664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.002692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.002708 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.105963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.106013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.106030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.106056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.106080 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.209053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.209111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.209133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.209164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.209183 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.312439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.312504 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.312521 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.312543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.312560 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.414959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.415020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.415042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.415068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.415088 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.518579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.518653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.518674 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.518703 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.518727 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.621936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.622364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.622577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.622760 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.622941 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.637375 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:43:10.72254433 +0000 UTC
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.726360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.726418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.726436 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.726460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.726480 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.829539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.829611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.829631 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.829656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.829676 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.933276 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.933344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.933362 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.933388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:41 crc kubenswrapper[4915]: I0127 18:43:41.933409 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:41Z","lastTransitionTime":"2026-01-27T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.036430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.036505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.036526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.036551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.036567 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.140222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.140872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.141557 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.141881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.142149 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.245871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.246512 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.246713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.246897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.247372 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.349600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.349643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.349653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.349669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.349680 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.357023 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.357061 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:43:42 crc kubenswrapper[4915]: E0127 18:43:42.357126 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.357191 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q"
Jan 27 18:43:42 crc kubenswrapper[4915]: E0127 18:43:42.357212 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 18:43:42 crc kubenswrapper[4915]: E0127 18:43:42.357415 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.357509 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:43:42 crc kubenswrapper[4915]: E0127 18:43:42.357594 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.395171 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.395202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.395210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.395224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.395235 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T18:43:42Z","lastTransitionTime":"2026-01-27T18:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.444383 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv"]
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.444740 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.446455 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.446738 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.447676 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.447965 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.517123 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.517252 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f402ce1a-e63b-4664-b323-0ba4cbed8a42-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.517286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f402ce1a-e63b-4664-b323-0ba4cbed8a42-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.517314 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f402ce1a-e63b-4664-b323-0ba4cbed8a42-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.517355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618468 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f402ce1a-e63b-4664-b323-0ba4cbed8a42-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618552 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" 
Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618604 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618677 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f402ce1a-e63b-4664-b323-0ba4cbed8a42-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618725 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618769 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f402ce1a-e63b-4664-b323-0ba4cbed8a42-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.618733 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f402ce1a-e63b-4664-b323-0ba4cbed8a42-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.620489 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f402ce1a-e63b-4664-b323-0ba4cbed8a42-service-ca\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.627450 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f402ce1a-e63b-4664-b323-0ba4cbed8a42-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.638207 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:44:26.003957751 +0000 UTC Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.638282 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.648162 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f402ce1a-e63b-4664-b323-0ba4cbed8a42-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-k5qhv\" (UID: \"f402ce1a-e63b-4664-b323-0ba4cbed8a42\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.648748 4915 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 18:43:42 crc kubenswrapper[4915]: I0127 18:43:42.767773 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" Jan 27 18:43:42 crc kubenswrapper[4915]: W0127 18:43:42.791034 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf402ce1a_e63b_4664_b323_0ba4cbed8a42.slice/crio-e92fe8b7a97216034d03d478d9caf54b166a96499c7ab89a06e2f2c0e5149b53 WatchSource:0}: Error finding container e92fe8b7a97216034d03d478d9caf54b166a96499c7ab89a06e2f2c0e5149b53: Status 404 returned error can't find the container with id e92fe8b7a97216034d03d478d9caf54b166a96499c7ab89a06e2f2c0e5149b53 Jan 27 18:43:43 crc kubenswrapper[4915]: I0127 18:43:43.009148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" event={"ID":"f402ce1a-e63b-4664-b323-0ba4cbed8a42","Type":"ContainerStarted","Data":"4bc312e62a8d3339b4b83e55c27b496aa29f20ccdc6706d2cbd5f80494930a41"} Jan 27 18:43:43 crc kubenswrapper[4915]: I0127 18:43:43.009291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" event={"ID":"f402ce1a-e63b-4664-b323-0ba4cbed8a42","Type":"ContainerStarted","Data":"e92fe8b7a97216034d03d478d9caf54b166a96499c7ab89a06e2f2c0e5149b53"} Jan 27 18:43:43 crc kubenswrapper[4915]: I0127 18:43:43.030404 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-k5qhv" podStartSLOduration=93.030379534 podStartE2EDuration="1m33.030379534s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:43:43.029916922 +0000 UTC 
m=+114.387770626" watchObservedRunningTime="2026-01-27 18:43:43.030379534 +0000 UTC m=+114.388233228" Jan 27 18:43:44 crc kubenswrapper[4915]: I0127 18:43:44.357452 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:44 crc kubenswrapper[4915]: E0127 18:43:44.358298 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:44 crc kubenswrapper[4915]: I0127 18:43:44.357528 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:44 crc kubenswrapper[4915]: E0127 18:43:44.358590 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:44 crc kubenswrapper[4915]: I0127 18:43:44.357525 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:44 crc kubenswrapper[4915]: E0127 18:43:44.358821 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:44 crc kubenswrapper[4915]: I0127 18:43:44.357573 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:44 crc kubenswrapper[4915]: E0127 18:43:44.359018 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.020736 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/1.log" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.022105 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/0.log" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.022143 4915 generic.go:334] "Generic (PLEG): container finished" podID="fe27a668-1ea7-44c8-9490-55cf8db5dad9" containerID="999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822" exitCode=1 Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.022174 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerDied","Data":"999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822"} Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.022221 4915 scope.go:117] "RemoveContainer" 
containerID="f8fb8b3233644f7ed4654d8ed296b8a360cec5d84127aee74a7d5ba510437dd0" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.022762 4915 scope.go:117] "RemoveContainer" containerID="999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822" Jan 27 18:43:46 crc kubenswrapper[4915]: E0127 18:43:46.023005 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5bpjb_openshift-multus(fe27a668-1ea7-44c8-9490-55cf8db5dad9)\"" pod="openshift-multus/multus-5bpjb" podUID="fe27a668-1ea7-44c8-9490-55cf8db5dad9" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.356659 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:46 crc kubenswrapper[4915]: E0127 18:43:46.357257 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.356779 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.356934 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:46 crc kubenswrapper[4915]: E0127 18:43:46.357508 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:46 crc kubenswrapper[4915]: E0127 18:43:46.357605 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:46 crc kubenswrapper[4915]: I0127 18:43:46.356847 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:46 crc kubenswrapper[4915]: E0127 18:43:46.357708 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:47 crc kubenswrapper[4915]: I0127 18:43:47.027679 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/1.log" Jan 27 18:43:48 crc kubenswrapper[4915]: I0127 18:43:48.357202 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:48 crc kubenswrapper[4915]: I0127 18:43:48.357277 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:48 crc kubenswrapper[4915]: I0127 18:43:48.357292 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:48 crc kubenswrapper[4915]: I0127 18:43:48.357396 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:48 crc kubenswrapper[4915]: E0127 18:43:48.357394 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:48 crc kubenswrapper[4915]: E0127 18:43:48.357509 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:48 crc kubenswrapper[4915]: E0127 18:43:48.357682 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:48 crc kubenswrapper[4915]: E0127 18:43:48.357815 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:48 crc kubenswrapper[4915]: I0127 18:43:48.359580 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:43:48 crc kubenswrapper[4915]: E0127 18:43:48.359944 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8spt_openshift-ovn-kubernetes(eb87671e-1bee-4bef-843d-6fce9467079d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" Jan 27 18:43:49 crc kubenswrapper[4915]: E0127 18:43:49.377835 4915 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 18:43:49 crc kubenswrapper[4915]: E0127 18:43:49.460204 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:43:50 crc kubenswrapper[4915]: I0127 18:43:50.357602 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:50 crc kubenswrapper[4915]: I0127 18:43:50.357602 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:50 crc kubenswrapper[4915]: I0127 18:43:50.357663 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:50 crc kubenswrapper[4915]: E0127 18:43:50.358416 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:50 crc kubenswrapper[4915]: E0127 18:43:50.358174 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:50 crc kubenswrapper[4915]: I0127 18:43:50.357859 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:50 crc kubenswrapper[4915]: E0127 18:43:50.358530 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:50 crc kubenswrapper[4915]: E0127 18:43:50.358645 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:52 crc kubenswrapper[4915]: I0127 18:43:52.357400 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:52 crc kubenswrapper[4915]: I0127 18:43:52.357473 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:52 crc kubenswrapper[4915]: I0127 18:43:52.357631 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:52 crc kubenswrapper[4915]: E0127 18:43:52.357615 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:52 crc kubenswrapper[4915]: E0127 18:43:52.357868 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:52 crc kubenswrapper[4915]: I0127 18:43:52.357998 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:52 crc kubenswrapper[4915]: E0127 18:43:52.358071 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:52 crc kubenswrapper[4915]: E0127 18:43:52.358178 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:54 crc kubenswrapper[4915]: I0127 18:43:54.356721 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:54 crc kubenswrapper[4915]: I0127 18:43:54.356775 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:54 crc kubenswrapper[4915]: I0127 18:43:54.356823 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:54 crc kubenswrapper[4915]: I0127 18:43:54.356740 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:54 crc kubenswrapper[4915]: E0127 18:43:54.356918 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:54 crc kubenswrapper[4915]: E0127 18:43:54.356964 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:54 crc kubenswrapper[4915]: E0127 18:43:54.357019 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:54 crc kubenswrapper[4915]: E0127 18:43:54.357102 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:54 crc kubenswrapper[4915]: E0127 18:43:54.462194 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:43:56 crc kubenswrapper[4915]: I0127 18:43:56.357352 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:56 crc kubenswrapper[4915]: I0127 18:43:56.357435 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:56 crc kubenswrapper[4915]: E0127 18:43:56.357542 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:56 crc kubenswrapper[4915]: I0127 18:43:56.357482 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:56 crc kubenswrapper[4915]: E0127 18:43:56.357691 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:56 crc kubenswrapper[4915]: I0127 18:43:56.357588 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:56 crc kubenswrapper[4915]: E0127 18:43:56.357750 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:56 crc kubenswrapper[4915]: E0127 18:43:56.357911 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:58 crc kubenswrapper[4915]: I0127 18:43:58.356890 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:43:58 crc kubenswrapper[4915]: I0127 18:43:58.356956 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:43:58 crc kubenswrapper[4915]: E0127 18:43:58.357064 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:43:58 crc kubenswrapper[4915]: I0127 18:43:58.356910 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:43:58 crc kubenswrapper[4915]: I0127 18:43:58.357174 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:43:58 crc kubenswrapper[4915]: E0127 18:43:58.357211 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:43:58 crc kubenswrapper[4915]: E0127 18:43:58.357427 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:43:58 crc kubenswrapper[4915]: E0127 18:43:58.357474 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:43:58 crc kubenswrapper[4915]: I0127 18:43:58.357890 4915 scope.go:117] "RemoveContainer" containerID="999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822" Jan 27 18:43:59 crc kubenswrapper[4915]: I0127 18:43:59.078543 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/1.log" Jan 27 18:43:59 crc kubenswrapper[4915]: I0127 18:43:59.079436 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerStarted","Data":"9e91d782087fea97d711280d394674401f8eaffbe0664580708da1efeaab5904"} Jan 27 18:43:59 crc kubenswrapper[4915]: E0127 18:43:59.462942 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:44:00 crc kubenswrapper[4915]: I0127 18:44:00.357219 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:00 crc kubenswrapper[4915]: I0127 18:44:00.357219 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:00 crc kubenswrapper[4915]: I0127 18:44:00.357258 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:00 crc kubenswrapper[4915]: I0127 18:44:00.357259 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:00 crc kubenswrapper[4915]: E0127 18:44:00.357576 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:00 crc kubenswrapper[4915]: E0127 18:44:00.357753 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:00 crc kubenswrapper[4915]: E0127 18:44:00.357909 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:00 crc kubenswrapper[4915]: E0127 18:44:00.357993 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:02 crc kubenswrapper[4915]: I0127 18:44:02.356839 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:02 crc kubenswrapper[4915]: I0127 18:44:02.356888 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:02 crc kubenswrapper[4915]: I0127 18:44:02.356921 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:02 crc kubenswrapper[4915]: I0127 18:44:02.357378 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:02 crc kubenswrapper[4915]: E0127 18:44:02.362898 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:02 crc kubenswrapper[4915]: E0127 18:44:02.363376 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:02 crc kubenswrapper[4915]: E0127 18:44:02.363542 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:02 crc kubenswrapper[4915]: E0127 18:44:02.364544 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:02 crc kubenswrapper[4915]: I0127 18:44:02.365868 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.094088 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/3.log" Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.097765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerStarted","Data":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.098279 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.132499 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podStartSLOduration=113.132479577 podStartE2EDuration="1m53.132479577s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:03.131061609 +0000 UTC m=+134.488915303" watchObservedRunningTime="2026-01-27 18:44:03.132479577 +0000 UTC m=+134.490333261" Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.403108 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-d467q"] Jan 27 18:44:03 crc kubenswrapper[4915]: I0127 18:44:03.403288 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:03 crc kubenswrapper[4915]: E0127 18:44:03.403487 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:04 crc kubenswrapper[4915]: I0127 18:44:04.357269 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:04 crc kubenswrapper[4915]: I0127 18:44:04.357321 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:04 crc kubenswrapper[4915]: I0127 18:44:04.357268 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:04 crc kubenswrapper[4915]: E0127 18:44:04.357523 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:04 crc kubenswrapper[4915]: E0127 18:44:04.357630 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:04 crc kubenswrapper[4915]: E0127 18:44:04.357919 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:04 crc kubenswrapper[4915]: E0127 18:44:04.464949 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:44:05 crc kubenswrapper[4915]: I0127 18:44:05.357191 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:05 crc kubenswrapper[4915]: E0127 18:44:05.357723 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:06 crc kubenswrapper[4915]: I0127 18:44:06.357354 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:06 crc kubenswrapper[4915]: I0127 18:44:06.357410 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:06 crc kubenswrapper[4915]: I0127 18:44:06.357410 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:06 crc kubenswrapper[4915]: E0127 18:44:06.357554 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:06 crc kubenswrapper[4915]: E0127 18:44:06.357871 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:06 crc kubenswrapper[4915]: E0127 18:44:06.357983 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:07 crc kubenswrapper[4915]: I0127 18:44:07.357767 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:07 crc kubenswrapper[4915]: E0127 18:44:07.358017 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:08 crc kubenswrapper[4915]: I0127 18:44:08.356932 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:08 crc kubenswrapper[4915]: I0127 18:44:08.356984 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:08 crc kubenswrapper[4915]: I0127 18:44:08.356984 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:08 crc kubenswrapper[4915]: E0127 18:44:08.357120 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 18:44:08 crc kubenswrapper[4915]: E0127 18:44:08.357412 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 18:44:08 crc kubenswrapper[4915]: E0127 18:44:08.357307 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 18:44:09 crc kubenswrapper[4915]: I0127 18:44:09.358109 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:09 crc kubenswrapper[4915]: E0127 18:44:09.359736 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-d467q" podUID="65be8e09-e032-40de-b290-c66c07282211" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.357529 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.357617 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.358385 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.361581 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.361639 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.361712 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 18:44:10 crc kubenswrapper[4915]: I0127 18:44:10.361736 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:44:11 crc kubenswrapper[4915]: I0127 18:44:11.357311 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:11 crc kubenswrapper[4915]: I0127 18:44:11.360722 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 18:44:11 crc kubenswrapper[4915]: I0127 18:44:11.361200 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.277208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.330357 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.331387 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.334030 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5s7q5"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.335321 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.342510 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.343070 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.343639 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.344096 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.344597 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.345372 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.351972 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.352055 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.375775 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.376573 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.377953 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.378715 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.379406 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.379399 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.384610 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.385022 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 
18:44:13.398236 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.398826 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399127 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399235 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399323 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399457 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399572 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399703 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399832 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.399927 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400098 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400189 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400333 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400430 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400628 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400767 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400849 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-encryption-config\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400880 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vr8f\" (UniqueName: \"kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-serving-cert\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400933 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400957 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400980 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401026 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401059 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-serving-cert\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401108 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401134 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401158 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-image-import-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401177 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-encryption-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401201 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401221 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401243 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401267 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401298 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401364 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-client\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401385 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdgl\" (UniqueName: \"kubernetes.io/projected/3c01f2fd-2df1-4def-9cb0-141697bd5e80-kube-api-access-5cdgl\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit-dir\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401494 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklh5\" (UniqueName: \"kubernetes.io/projected/16f09be8-b919-45ca-8a58-38e72e3bb85c-kube-api-access-dklh5\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401519 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-policies\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401542 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401575 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xb7\" (UniqueName: \"kubernetes.io/projected/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-kube-api-access-95xb7\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-dir\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.401655 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-client\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400783 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.402039 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.400835 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.402337 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.402380 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.402451 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.402561 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.411144 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.411761 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.413467 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.413837 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.414248 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ljskk"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.414485 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.414808 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.415015 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.415118 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.415026 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.415472 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.415650 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.416408 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.416746 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l57nd"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.417069 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.417360 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.417400 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhjvn"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.417850 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.418183 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.418434 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l57nd"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.418635 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.421037 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.421188 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.421506 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.424111 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.424551 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.425027 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.425565 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.425936 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.426226 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.428029 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.429097 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.429377 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d27cj"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.429981 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.431430 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sqx6"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.431668 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.431894 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.431939 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.431952 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432119 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432158 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432172 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432250 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432334 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.432467 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.433817 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.434395 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.438081 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.459357 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.460137 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.460846 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461095 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461261 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461562 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461709 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461874 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.461943 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.462125 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.462429 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.462490 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.462706 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.462437 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.463055 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.463304 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.463829 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.464222 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.464307 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.465259 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.465868 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.465738 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.465939 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466001 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466084 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466176 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466306 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466630 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466681 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.466748 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467032 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467103 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467179 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467221 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467033 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467370 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467243 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467481 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.467936 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.468191 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.468347 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.468380 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.468660 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.468889 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.469128 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.469408 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.469571 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.469861 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.470246 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.501538 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.506985 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.508366 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.509281 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.509562 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.509935 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.510462 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511645 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-encryption-config\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511668 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-serving-cert\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511687 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vr8f\" (UniqueName: \"kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511706 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511737 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511758 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511774 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511799 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-serving-cert\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511814 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511828 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511846 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-image-import-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511859 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-encryption-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511874 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511888 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 
18:44:13.511903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511936 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511958 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.511996 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: 
\"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512010 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-client\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdgl\" (UniqueName: \"kubernetes.io/projected/3c01f2fd-2df1-4def-9cb0-141697bd5e80-kube-api-access-5cdgl\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512050 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit-dir\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklh5\" (UniqueName: \"kubernetes.io/projected/16f09be8-b919-45ca-8a58-38e72e3bb85c-kube-api-access-dklh5\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512081 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-policies\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: 
\"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95xb7\" (UniqueName: \"kubernetes.io/projected/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-kube-api-access-95xb7\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512136 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512151 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-dir\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512186 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-client\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512202 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512384 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.512919 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7npbf"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.513075 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.513094 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.513314 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.513703 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.515457 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.515655 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.517439 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.519043 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.519776 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.519873 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.520338 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.520358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.520535 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-audit-dir\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.521081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-policies\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.521172 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.525656 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.526377 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.527486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.531245 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c01f2fd-2df1-4def-9cb0-141697bd5e80-audit-dir\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.531994 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16f09be8-b919-45ca-8a58-38e72e3bb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.533911 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.539217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config\") pod \"controller-manager-879f6c89f-vzw97\" (UID: 
\"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.539843 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.540998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-image-import-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.542221 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.542766 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.542967 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5s7q5\" 
(UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.542945 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.543436 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f09be8-b919-45ca-8a58-38e72e3bb85c-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.544641 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-etcd-client\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.545935 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.549486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.551394 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.555429 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.556011 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.556126 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.557039 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.592565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-serving-cert\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.592893 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-encryption-config\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.593240 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16f09be8-b919-45ca-8a58-38e72e3bb85c-etcd-client\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.593467 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-encryption-config\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.593774 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.594289 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c01f2fd-2df1-4def-9cb0-141697bd5e80-serving-cert\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.598453 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.598502 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.599274 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.599334 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.599720 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.599773 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.599987 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.600107 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.600275 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.600353 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.604997 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.605137 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.609391 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.609954 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.610720 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.611367 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.611850 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4lgb"] Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.612361 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.612924 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.613363 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.615818 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.618227 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-49lf7"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.618977 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5s7q5"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.619165 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-49lf7"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.623372 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.623413 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nw5h9"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.623864 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.623983 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nw5h9"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.625390 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.626355 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.627846 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.629606 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ljskk"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.631814 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.631835 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d27cj"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.633863 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.635552 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.636072 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.637963 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sqx6"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.638296 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.642629 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.646867 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7npbf"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.647545 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.648678 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.649652 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l57nd"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.650662 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.651655 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.652744 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wpfnn"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.653528 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wpfnn"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.653906 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.654049 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.655554 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhjvn"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.656929 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.657973 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.659062 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.660080 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.661242 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.662007 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.663187 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nw5h9"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.664207 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.665700 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.666619 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.668010 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gpbpr"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.669457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gpbpr"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.669494 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.670811 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gpbpr"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.677894 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.678117 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.682155 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.684418 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4lgb"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.687462 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lnms"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.692508 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lnms"]
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.692613 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4lnms"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.695308 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.714267 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.734480 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.754707 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.773846 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.794063 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.814695 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.833452 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.853878 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.874406 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.893859 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.913776 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.934351 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.954370 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.973918 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 18:44:13 crc kubenswrapper[4915]: I0127 18:44:13.995224 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.014397 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.034236 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.066121 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.074680 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.134388 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.153674 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.174768 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.193703 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.214404 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.234355 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.254285 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.274741 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.294080 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.334638 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.341392 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdgl\" (UniqueName: \"kubernetes.io/projected/3c01f2fd-2df1-4def-9cb0-141697bd5e80-kube-api-access-5cdgl\") pod \"apiserver-7bbb656c7d-bb2js\" (UID: \"3c01f2fd-2df1-4def-9cb0-141697bd5e80\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.374206 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.380990 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklh5\" (UniqueName: \"kubernetes.io/projected/16f09be8-b919-45ca-8a58-38e72e3bb85c-kube-api-access-dklh5\") pod \"apiserver-76f77b778f-5s7q5\" (UID: \"16f09be8-b919-45ca-8a58-38e72e3bb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.395061 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.414751 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.434441 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.454239 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.474704 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.494405 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.499774 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.515226 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.532626 4915 request.go:700] Waited for 1.004931442s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.563157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88\") pod \"controller-manager-879f6c89f-vzw97\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.574850 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.579488 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.583463 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95xb7\" (UniqueName: \"kubernetes.io/projected/331b9c26-d7e8-4ef9-97cc-ec36c884cc2d-kube-api-access-95xb7\") pod \"openshift-apiserver-operator-796bbdcf4f-6kwr6\" (UID: \"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.595027 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.640758 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.646982 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vr8f\" (UniqueName: \"kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f\") pod \"route-controller-manager-6576b87f9c-882qk\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.661075 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.675950 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.676266 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.690534 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.697374 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.721289 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.736231 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.755002 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.774263 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.794848 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.801050 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.804347 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"]
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.815204 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.833747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"]
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.835217 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.854248 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: W0127 18:44:14.858444 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b2b7254_d441_4474_b545_16b89b18f845.slice/crio-42700dd1a9532ccadc923575d286cbcb39f86f98a6861ccdf700137353cc3eb2 WatchSource:0}: Error finding container 42700dd1a9532ccadc923575d286cbcb39f86f98a6861ccdf700137353cc3eb2: Status 404 returned error can't find the container with id 42700dd1a9532ccadc923575d286cbcb39f86f98a6861ccdf700137353cc3eb2
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.873698 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.887423 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5s7q5"]
Jan 27 18:44:14 crc kubenswrapper[4915]: W0127 18:44:14.893683 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f09be8_b919_45ca_8a58_38e72e3bb85c.slice/crio-2735e81e2d4cbf3456de007300c7d3b3b4ad8b21361f43ecd8d9236d12fc0e51 WatchSource:0}: Error finding container 2735e81e2d4cbf3456de007300c7d3b3b4ad8b21361f43ecd8d9236d12fc0e51: Status 404 returned error can't find the container with id 2735e81e2d4cbf3456de007300c7d3b3b4ad8b21361f43ecd8d9236d12fc0e51
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.893741 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.911872 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6"]
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.914773 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.935431 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.955337 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.966704 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"]
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.975642 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 18:44:14 crc kubenswrapper[4915]: W0127 18:44:14.977437 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda76afbe7_44a3_4e31_ba26_53695d082598.slice/crio-2c56468ad7acc27b78113e23092eba409c598205cfffd348d6acd0226efb4eb6 WatchSource:0}: Error finding container 2c56468ad7acc27b78113e23092eba409c598205cfffd348d6acd0226efb4eb6: Status 404 returned error can't find the container with id 2c56468ad7acc27b78113e23092eba409c598205cfffd348d6acd0226efb4eb6
Jan 27 18:44:14 crc kubenswrapper[4915]: I0127 18:44:14.993948 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.013704 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.034128 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.054192 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.074150 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.094019 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.114434 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.133297 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.145504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" event={"ID":"8b2b7254-d441-4474-b545-16b89b18f845","Type":"ContainerStarted","Data":"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.145541 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" event={"ID":"8b2b7254-d441-4474-b545-16b89b18f845","Type":"ContainerStarted","Data":"42700dd1a9532ccadc923575d286cbcb39f86f98a6861ccdf700137353cc3eb2"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.146155 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.147622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" event={"ID":"a76afbe7-44a3-4e31-ba26-53695d082598","Type":"ContainerStarted","Data":"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.147669 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" event={"ID":"a76afbe7-44a3-4e31-ba26-53695d082598","Type":"ContainerStarted","Data":"2c56468ad7acc27b78113e23092eba409c598205cfffd348d6acd0226efb4eb6"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.148447 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.148538 4915 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vzw97 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.148569 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" podUID="8b2b7254-d441-4474-b545-16b89b18f845" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.150325 4915 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-882qk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.150575 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.151901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" event={"ID":"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d","Type":"ContainerStarted","Data":"6b1eb48fb1822202d58f7212ac443463479869d7bf8406ad93cc416dd777918b"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.152448 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" event={"ID":"331b9c26-d7e8-4ef9-97cc-ec36c884cc2d","Type":"ContainerStarted","Data":"3d50faa0aefaf2e6be2ea1971bfa43a8221694e3e2c21be9b6426d25eb93fb48"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.153394 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" event={"ID":"16f09be8-b919-45ca-8a58-38e72e3bb85c","Type":"ContainerStarted","Data":"2735e81e2d4cbf3456de007300c7d3b3b4ad8b21361f43ecd8d9236d12fc0e51"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.153838 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.156118 4915 generic.go:334] "Generic (PLEG): container finished" podID="3c01f2fd-2df1-4def-9cb0-141697bd5e80" containerID="9fbcbbe570e63a72362a46a5e40a8152ca2ac229e74ca1b70ec9d0d0c2a2a99a" exitCode=0
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.156171 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" event={"ID":"3c01f2fd-2df1-4def-9cb0-141697bd5e80","Type":"ContainerDied","Data":"9fbcbbe570e63a72362a46a5e40a8152ca2ac229e74ca1b70ec9d0d0c2a2a99a"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.156226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" event={"ID":"3c01f2fd-2df1-4def-9cb0-141697bd5e80","Type":"ContainerStarted","Data":"f72e768142bde431f0068622a7fabd4c021eaa86e0f43f947aa5b4671b805484"}
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.173642 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.194845 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.213896 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.234039 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.253625 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.274210 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.296453 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.315357 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.335758 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.354305 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.375821 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.394105 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.413924 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.434367 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.454430 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.474633 4915 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.494378 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.553743 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-images\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.553865 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03d9c82-1127-4320-858f-bb1a66eb487f-trusted-ca\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.553909 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.553963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554000 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-service-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554044 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33cf36b9-fe39-4635-971e-e6b434b58980-serving-cert\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554081 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4ed4ecf0-fe78-4606-bbfc-abe9539010be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-serving-cert\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27
18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554171 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554206 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn82w\" (UniqueName: \"kubernetes.io/projected/06e673dd-e486-441e-b971-a72c53032558-kube-api-access-wn82w\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554236 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554740 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554781 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554908 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-trusted-ca\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554966 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.554998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: 
\"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555024 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbzf\" (UniqueName: \"kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555383 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-images\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555444 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh2rw\" (UniqueName: \"kubernetes.io/projected/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-kube-api-access-rh2rw\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555482 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kb7g8\" (UniqueName: \"kubernetes.io/projected/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-kube-api-access-kb7g8\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbcz\" (UniqueName: \"kubernetes.io/projected/8ea219a5-50e9-41e3-887e-e23d61ed73ef-kube-api-access-tgbcz\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555597 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555660 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " 
pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555739 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zk4\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-kube-api-access-85zk4\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555856 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.555942 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26l5\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-kube-api-access-x26l5\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556073 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556137 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-config\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-client\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556233 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68s8k\" (UniqueName: \"kubernetes.io/projected/42d32061-12d1-4e45-8217-73fc194a1b3f-kube-api-access-68s8k\") pod 
\"downloads-7954f5f757-l57nd\" (UID: \"42d32061-12d1-4e45-8217-73fc194a1b3f\") " pod="openshift-console/downloads-7954f5f757-l57nd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556353 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556389 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-config\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556465 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556518 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddv28\" (UniqueName: \"kubernetes.io/projected/78d062b6-d48b-44f6-abc5-c713cf372402-kube-api-access-ddv28\") pod \"migrator-59844c95c7-l8m44\" (UID: \"78d062b6-d48b-44f6-abc5-c713cf372402\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556615 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556678 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-config\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556755 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-config\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556787 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-auth-proxy-config\") pod 
\"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556849 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-config\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.556913 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.557457 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg85h\" (UniqueName: \"kubernetes.io/projected/4ed4ecf0-fe78-4606-bbfc-abe9539010be-kube-api-access-fg85h\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.557505 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v59cm\" (UniqueName: \"kubernetes.io/projected/cb28ab26-da64-46ae-9558-ab9c29622a7d-kube-api-access-v59cm\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 
18:44:15.557539 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lrrl\" (UniqueName: \"kubernetes.io/projected/5d15c942-59c9-4570-8c33-a972dce41fd6-kube-api-access-5lrrl\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.557600 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4848a06-bbb5-4855-b7f7-baa6e534cce2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.557694 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-service-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.557734 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-serving-cert\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558198 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558249 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49ph\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558280 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558852 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.558896 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d52r7\" (UniqueName: \"kubernetes.io/projected/b4848a06-bbb5-4855-b7f7-baa6e534cce2-kube-api-access-d52r7\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559137 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fnf\" (UniqueName: \"kubernetes.io/projected/9e1f0e51-1e0a-4449-9b41-54894000941f-kube-api-access-67fnf\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559185 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559254 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e673dd-e486-441e-b971-a72c53032558-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 
18:44:15.559286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d15c942-59c9-4570-8c33-a972dce41fd6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559346 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4848a06-bbb5-4855-b7f7-baa6e534cce2-proxy-tls\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559411 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559627 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559783 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559864 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb28ab26-da64-46ae-9558-ab9c29622a7d-metrics-tls\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ghw\" (UniqueName: \"kubernetes.io/projected/33cf36b9-fe39-4635-971e-e6b434b58980-kube-api-access-r6ghw\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e1f0e51-1e0a-4449-9b41-54894000941f-proxy-tls\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.559983 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ea219a5-50e9-41e3-887e-e23d61ed73ef-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwc8s\" (UniqueName: \"kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560152 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e03d9c82-1127-4320-858f-bb1a66eb487f-metrics-tls\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560168 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560217 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560295 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-machine-approver-tls\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560330 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e673dd-e486-441e-b971-a72c53032558-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.560413 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.060386969 +0000 UTC m=+147.418240673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-config\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560541 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.560590 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgjxz\" (UniqueName: \"kubernetes.io/projected/d4047fcf-98c6-410c-9388-5af5a8640820-kube-api-access-jgjxz\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662026 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.662231 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.162183649 +0000 UTC m=+147.520037373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662315 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc47495-7d8a-4981-8647-f93a058fe075-service-ca-bundle\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662406 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662462 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbzf\" (UniqueName: \"kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662544 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-images\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662735 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbcz\" (UniqueName: \"kubernetes.io/projected/8ea219a5-50e9-41e3-887e-e23d61ed73ef-kube-api-access-tgbcz\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662836 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zk4\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-kube-api-access-85zk4\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662899 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.662957 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m54\" (UniqueName: \"kubernetes.io/projected/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-kube-api-access-z2m54\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.663008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-node-bootstrap-token\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.663063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26l5\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-kube-api-access-x26l5\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.663602 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c64c84-9ed1-47b6-af09-0af742ec9771-serving-cert\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.664905 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.664992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-config\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665014 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2b4k\" (UniqueName: \"kubernetes.io/projected/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-kube-api-access-h2b4k\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-client\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665112 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-images\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665127 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68s8k\" (UniqueName: \"kubernetes.io/projected/42d32061-12d1-4e45-8217-73fc194a1b3f-kube-api-access-68s8k\") pod \"downloads-7954f5f757-l57nd\" (UID: \"42d32061-12d1-4e45-8217-73fc194a1b3f\") " pod="openshift-console/downloads-7954f5f757-l57nd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.665249 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddv28\" (UniqueName: \"kubernetes.io/projected/78d062b6-d48b-44f6-abc5-c713cf372402-kube-api-access-ddv28\") pod \"migrator-59844c95c7-l8m44\" (UID: \"78d062b6-d48b-44f6-abc5-c713cf372402\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.666340 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-config\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.666980 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-config\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667414 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-config\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667552 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667689 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg85h\" (UniqueName: \"kubernetes.io/projected/4ed4ecf0-fe78-4606-bbfc-abe9539010be-kube-api-access-fg85h\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-config\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667776 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9536396c-05f0-49f2-96b6-969e1c9b92a2-config\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667942 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-config\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.667947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-socket-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668107 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-auth-proxy-config\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668192 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-config\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668253 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v59cm\" (UniqueName: \"kubernetes.io/projected/cb28ab26-da64-46ae-9558-ab9c29622a7d-kube-api-access-v59cm\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668513 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lrrl\" (UniqueName: \"kubernetes.io/projected/5d15c942-59c9-4570-8c33-a972dce41fd6-kube-api-access-5lrrl\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668635 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr76n\" (UniqueName: \"kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.668982 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669169 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669304 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-serving-cert\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-plugins-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669508 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrzn\" (UniqueName: \"kubernetes.io/projected/60b5b6a8-738d-4575-a1d1-ecdec1decb98-kube-api-access-lrrzn\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669763 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e673dd-e486-441e-b971-a72c53032558-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.669907 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-auth-proxy-config\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.670319 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-config\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.670473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67fnf\" (UniqueName: \"kubernetes.io/projected/9e1f0e51-1e0a-4449-9b41-54894000941f-kube-api-access-67fnf\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.670573 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.670719 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-config-volume\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.670843 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-key\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671140 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-cabundle\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671260 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671338 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d15c942-59c9-4570-8c33-a972dce41fd6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671415 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb28ab26-da64-46ae-9558-ab9c29622a7d-metrics-tls\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ghw\" (UniqueName: \"kubernetes.io/projected/33cf36b9-fe39-4635-971e-e6b434b58980-kube-api-access-r6ghw\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.671774 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.673389 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.674894 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.676585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ea219a5-50e9-41e3-887e-e23d61ed73ef-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.676696 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e03d9c82-1127-4320-858f-bb1a66eb487f-metrics-tls\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.677685 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.677783 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.678049 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch646\" (UniqueName: \"kubernetes.io/projected/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-kube-api-access-ch646\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.679263 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-csi-data-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.679413 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.679491 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkngm\" (UniqueName: \"kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.679627 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4c3452d-d381-4a8f-a917-d94870b996df-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.679711 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-config\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680442 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680769 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsthd\" (UniqueName: \"kubernetes.io/projected/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-kube-api-access-bsthd\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680858 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgjxz\" (UniqueName: \"kubernetes.io/projected/d4047fcf-98c6-410c-9388-5af5a8640820-kube-api-access-jgjxz\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/163cabda-fdf9-41c8-a913-db9e8f848e91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-default-certificate\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7"
Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.680992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9g9\" (UniqueName: \"kubernetes.io/projected/26adc908-f613-427c-821d-9070ea52de42-kube-api-access-gh9g9\") pod \"machine-config-server-wpfnn\" (UID:
\"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681068 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-service-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681096 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03d9c82-1127-4320-858f-bb1a66eb487f-trusted-ca\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681134 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/972629a9-8882-491c-863e-770570a0aeac-tmpfs\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-metrics-certs\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681201 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-serving-cert\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c3452d-d381-4a8f-a917-d94870b996df-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681266 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7f7j\" (UniqueName: \"kubernetes.io/projected/cdc47495-7d8a-4981-8647-f93a058fe075-kube-api-access-j7f7j\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681294 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-certs\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc 
kubenswrapper[4915]: I0127 18:44:15.681327 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681395 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681428 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-registration-dir\") pod \"csi-hostpathplugin-4lnms\" 
(UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4c3452d-d381-4a8f-a917-d94870b996df-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz2z5\" (UniqueName: \"kubernetes.io/projected/163cabda-fdf9-41c8-a913-db9e8f848e91-kube-api-access-xz2z5\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh2rw\" (UniqueName: \"kubernetes.io/projected/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-kube-api-access-rh2rw\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681587 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7g8\" (UniqueName: \"kubernetes.io/projected/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-kube-api-access-kb7g8\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681646 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681683 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-webhook-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681755 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9536396c-05f0-49f2-96b6-969e1c9b92a2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681876 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681909 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-profile-collector-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681943 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.681974 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkxx\" (UniqueName: \"kubernetes.io/projected/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-kube-api-access-gtkxx\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682025 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682056 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-config\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682086 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682117 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682145 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-config\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: 
I0127 18:44:15.682165 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-config\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682175 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682346 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.682397 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e03d9c82-1127-4320-858f-bb1a66eb487f-metrics-tls\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683321 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4848a06-bbb5-4855-b7f7-baa6e534cce2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: 
\"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683424 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-service-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683512 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-metrics-tls\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683578 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683743 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c49ph\" (UniqueName: 
\"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683889 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.683976 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d52r7\" (UniqueName: \"kubernetes.io/projected/b4848a06-bbb5-4855-b7f7-baa6e534cce2-kube-api-access-d52r7\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684053 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03d9c82-1127-4320-858f-bb1a66eb487f-trusted-ca\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684282 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-stats-auth\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684377 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4848a06-bbb5-4855-b7f7-baa6e534cce2-proxy-tls\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684448 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684484 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684623 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.684714 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e1f0e51-1e0a-4449-9b41-54894000941f-proxy-tls\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.685021 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.685157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.685295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.685786 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.686732 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea219a5-50e9-41e3-887e-e23d61ed73ef-config\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.688024 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.688398 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.688695 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.689122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-serving-cert\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.689457 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.689551 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqpj\" (UniqueName: \"kubernetes.io/projected/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-kube-api-access-8sqpj\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.689681 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.689765 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.690209 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.690650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.691185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwc8s\" (UniqueName: \"kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.691910 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.191206062 +0000 UTC m=+147.549059726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.691989 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60b5b6a8-738d-4575-a1d1-ecdec1decb98-cert\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692214 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692698 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33cf36b9-fe39-4635-971e-e6b434b58980-service-ca-bundle\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692088 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-machine-approver-tls\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692943 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.692719 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e673dd-e486-441e-b971-a72c53032558-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.693520 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.693573 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e673dd-e486-441e-b971-a72c53032558-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.694674 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e673dd-e486-441e-b971-a72c53032558-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695752 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7bx\" (UniqueName: \"kubernetes.io/projected/66c0e807-851e-4efe-bab8-90f94bb915e8-kube-api-access-ww7bx\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695751 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695832 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkl7h\" (UniqueName: \"kubernetes.io/projected/972629a9-8882-491c-863e-770570a0aeac-kube-api-access-jkl7h\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-client\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695882 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695918 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l89vb\" (UniqueName: \"kubernetes.io/projected/c3f419c0-de2f-49f4-8d15-34a04465730b-kube-api-access-l89vb\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.695952 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9536396c-05f0-49f2-96b6-969e1c9b92a2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696017 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-images\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696049 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c64c84-9ed1-47b6-af09-0af742ec9771-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696075 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-srv-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696112 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33cf36b9-fe39-4635-971e-e6b434b58980-serving-cert\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696226 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d4047fcf-98c6-410c-9388-5af5a8640820-etcd-service-ca\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696292 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-srv-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696487 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnsp\" (UniqueName: 
\"kubernetes.io/projected/f2c64c84-9ed1-47b6-af09-0af742ec9771-kube-api-access-ppnsp\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.696546 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-mountpoint-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.697270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9e1f0e51-1e0a-4449-9b41-54894000941f-images\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.697669 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e1f0e51-1e0a-4449-9b41-54894000941f-proxy-tls\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698280 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4848a06-bbb5-4855-b7f7-baa6e534cce2-proxy-tls\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698441 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4ed4ecf0-fe78-4606-bbfc-abe9539010be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-serving-cert\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698525 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4848a06-bbb5-4855-b7f7-baa6e534cce2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698566 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82w\" (UniqueName: \"kubernetes.io/projected/06e673dd-e486-441e-b971-a72c53032558-kube-api-access-wn82w\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698593 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: 
\"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698828 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.698861 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-trusted-ca\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.699268 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.699474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.700187 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-trusted-ca\") pod 
\"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.700703 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.703257 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d15c942-59c9-4570-8c33-a972dce41fd6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.705746 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.706048 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-machine-approver-tls\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.706105 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.706612 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb28ab26-da64-46ae-9558-ab9c29622a7d-metrics-tls\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.707117 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4ed4ecf0-fe78-4606-bbfc-abe9539010be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.708086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.709698 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-serving-cert\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 
18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.709710 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ea219a5-50e9-41e3-887e-e23d61ed73ef-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.709830 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33cf36b9-fe39-4635-971e-e6b434b58980-serving-cert\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.710330 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbcz\" (UniqueName: \"kubernetes.io/projected/8ea219a5-50e9-41e3-887e-e23d61ed73ef-kube-api-access-tgbcz\") pod \"machine-api-operator-5694c8668f-ltcdr\" (UID: \"8ea219a5-50e9-41e3-887e-e23d61ed73ef\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.730654 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbzf\" (UniqueName: \"kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf\") pod \"console-f9d7485db-sd8x9\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.733390 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.742530 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.748606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zk4\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-kube-api-access-85zk4\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.785243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26l5\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-kube-api-access-x26l5\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.789530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68s8k\" (UniqueName: \"kubernetes.io/projected/42d32061-12d1-4e45-8217-73fc194a1b3f-kube-api-access-68s8k\") pod \"downloads-7954f5f757-l57nd\" (UID: \"42d32061-12d1-4e45-8217-73fc194a1b3f\") " pod="openshift-console/downloads-7954f5f757-l57nd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800081 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.800301 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.300265916 +0000 UTC m=+147.658119600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800408 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-default-certificate\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh9g9\" (UniqueName: \"kubernetes.io/projected/26adc908-f613-427c-821d-9070ea52de42-kube-api-access-gh9g9\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800466 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/972629a9-8882-491c-863e-770570a0aeac-tmpfs\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-metrics-certs\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800522 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-serving-cert\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800544 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7f7j\" (UniqueName: \"kubernetes.io/projected/cdc47495-7d8a-4981-8647-f93a058fe075-kube-api-access-j7f7j\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800564 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-certs\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c3452d-d381-4a8f-a917-d94870b996df-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800611 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-registration-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800639 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4c3452d-d381-4a8f-a917-d94870b996df-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800678 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz2z5\" (UniqueName: \"kubernetes.io/projected/163cabda-fdf9-41c8-a913-db9e8f848e91-kube-api-access-xz2z5\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800715 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-webhook-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800738 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9536396c-05f0-49f2-96b6-969e1c9b92a2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800763 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-profile-collector-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800834 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkxx\" (UniqueName: \"kubernetes.io/projected/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-kube-api-access-gtkxx\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800859 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-config\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800893 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800921 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-metrics-tls\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.800973 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-stats-auth\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqpj\" (UniqueName: \"kubernetes.io/projected/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-kube-api-access-8sqpj\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801091 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60b5b6a8-738d-4575-a1d1-ecdec1decb98-cert\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801117 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7bx\" (UniqueName: \"kubernetes.io/projected/66c0e807-851e-4efe-bab8-90f94bb915e8-kube-api-access-ww7bx\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801142 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkl7h\" (UniqueName: \"kubernetes.io/projected/972629a9-8882-491c-863e-770570a0aeac-kube-api-access-jkl7h\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801165 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801188 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9536396c-05f0-49f2-96b6-969e1c9b92a2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801214 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l89vb\" (UniqueName: \"kubernetes.io/projected/c3f419c0-de2f-49f4-8d15-34a04465730b-kube-api-access-l89vb\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c64c84-9ed1-47b6-af09-0af742ec9771-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801276 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-srv-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801298 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-srv-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801322 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnsp\" (UniqueName: 
\"kubernetes.io/projected/f2c64c84-9ed1-47b6-af09-0af742ec9771-kube-api-access-ppnsp\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801342 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-mountpoint-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801375 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc47495-7d8a-4981-8647-f93a058fe075-service-ca-bundle\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801414 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2m54\" (UniqueName: \"kubernetes.io/projected/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-kube-api-access-z2m54\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:15 crc 
kubenswrapper[4915]: I0127 18:44:15.801461 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c64c84-9ed1-47b6-af09-0af742ec9771-serving-cert\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-node-bootstrap-token\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801501 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2b4k\" (UniqueName: \"kubernetes.io/projected/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-kube-api-access-h2b4k\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801538 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9536396c-05f0-49f2-96b6-969e1c9b92a2-config\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801562 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-socket-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801595 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr76n\" (UniqueName: \"kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801627 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801650 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrzn\" (UniqueName: \"kubernetes.io/projected/60b5b6a8-738d-4575-a1d1-ecdec1decb98-kube-api-access-lrrzn\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-plugins-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801698 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801720 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-key\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801747 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801768 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-config-volume\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801807 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-cabundle\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801851 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801881 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch646\" (UniqueName: \"kubernetes.io/projected/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-kube-api-access-ch646\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-csi-data-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801186 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4c3452d-d381-4a8f-a917-d94870b996df-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.801922 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.802083 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkngm\" (UniqueName: \"kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.802122 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4c3452d-d381-4a8f-a917-d94870b996df-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.802154 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsthd\" (UniqueName: \"kubernetes.io/projected/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-kube-api-access-bsthd\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.802182 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/163cabda-fdf9-41c8-a913-db9e8f848e91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.803246 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.803674 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-config\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.803753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-mountpoint-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.803903 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-certs\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.804161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f2c64c84-9ed1-47b6-af09-0af742ec9771-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.805443 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.805494 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-default-certificate\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.806098 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60b5b6a8-738d-4575-a1d1-ecdec1decb98-cert\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.806718 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc47495-7d8a-4981-8647-f93a058fe075-service-ca-bundle\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.807322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.807534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/163cabda-fdf9-41c8-a913-db9e8f848e91-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.807913 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9536396c-05f0-49f2-96b6-969e1c9b92a2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.808182 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-plugins-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.808317 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-registration-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.810262 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddv28\" (UniqueName: \"kubernetes.io/projected/78d062b6-d48b-44f6-abc5-c713cf372402-kube-api-access-ddv28\") pod \"migrator-59844c95c7-l8m44\" (UID: \"78d062b6-d48b-44f6-abc5-c713cf372402\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 
18:44:15.810524 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-metrics-certs\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.810567 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/972629a9-8882-491c-863e-770570a0aeac-tmpfs\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.810864 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/26adc908-f613-427c-821d-9070ea52de42-node-bootstrap-token\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.811400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9536396c-05f0-49f2-96b6-969e1c9b92a2-config\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.811461 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-socket-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.811491 
4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.31147098 +0000 UTC m=+147.669324834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.812320 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-srv-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.813648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-metrics-tls\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.814127 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-apiservice-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.814769 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-cabundle\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.815138 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-srv-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.815276 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.815435 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c64c84-9ed1-47b6-af09-0af742ec9771-serving-cert\") pod \"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.814881 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-config-volume\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 
18:44:15.816434 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.817092 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.817993 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.818867 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c3f419c0-de2f-49f4-8d15-34a04465730b-signing-key\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.821485 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cdc47495-7d8a-4981-8647-f93a058fe075-stats-auth\") pod 
\"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.822448 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/66c0e807-851e-4efe-bab8-90f94bb915e8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.823206 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-serving-cert\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.823666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-profile-collector-cert\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.824551 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/972629a9-8882-491c-863e-770570a0aeac-webhook-cert\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.829426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f4c3452d-d381-4a8f-a917-d94870b996df-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.829771 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-csi-data-dir\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.840126 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg85h\" (UniqueName: \"kubernetes.io/projected/4ed4ecf0-fe78-4606-bbfc-abe9539010be-kube-api-access-fg85h\") pod \"multus-admission-controller-857f4d67dd-rwhgq\" (UID: \"4ed4ecf0-fe78-4606-bbfc-abe9539010be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.842710 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.851954 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lrrl\" (UniqueName: \"kubernetes.io/projected/5d15c942-59c9-4570-8c33-a972dce41fd6-kube-api-access-5lrrl\") pod \"cluster-samples-operator-665b6dd947-mx2qp\" (UID: \"5d15c942-59c9-4570-8c33-a972dce41fd6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.877666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v59cm\" (UniqueName: \"kubernetes.io/projected/cb28ab26-da64-46ae-9558-ab9c29622a7d-kube-api-access-v59cm\") pod \"dns-operator-744455d44c-d27cj\" (UID: \"cb28ab26-da64-46ae-9558-ab9c29622a7d\") " pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.890503 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67fnf\" (UniqueName: \"kubernetes.io/projected/9e1f0e51-1e0a-4449-9b41-54894000941f-kube-api-access-67fnf\") pod \"machine-config-operator-74547568cd-m7bv2\" (UID: \"9e1f0e51-1e0a-4449-9b41-54894000941f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.903369 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:15 crc kubenswrapper[4915]: E0127 18:44:15.903982 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.40396299 +0000 UTC m=+147.761816654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.921520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36eeecc3-2b95-4f0f-a182-49b66fdb48c1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4sxzb\" (UID: \"36eeecc3-2b95-4f0f-a182-49b66fdb48c1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.932134 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ghw\" (UniqueName: \"kubernetes.io/projected/33cf36b9-fe39-4635-971e-e6b434b58980-kube-api-access-r6ghw\") pod \"authentication-operator-69f744f599-ljskk\" (UID: \"33cf36b9-fe39-4635-971e-e6b434b58980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.952152 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03d9c82-1127-4320-858f-bb1a66eb487f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v5sff\" (UID: \"e03d9c82-1127-4320-858f-bb1a66eb487f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.968555 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgjxz\" (UniqueName: \"kubernetes.io/projected/d4047fcf-98c6-410c-9388-5af5a8640820-kube-api-access-jgjxz\") pod \"etcd-operator-b45778765-8sqx6\" (UID: \"d4047fcf-98c6-410c-9388-5af5a8640820\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.972467 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"] Jan 27 18:44:15 crc kubenswrapper[4915]: I0127 18:44:15.987693 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d52r7\" (UniqueName: \"kubernetes.io/projected/b4848a06-bbb5-4855-b7f7-baa6e534cce2-kube-api-access-d52r7\") pod \"machine-config-controller-84d6567774-zkhzf\" (UID: \"b4848a06-bbb5-4855-b7f7-baa6e534cce2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.005557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.005941 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.505930553 +0000 UTC m=+147.863784217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.007782 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7g8\" (UniqueName: \"kubernetes.io/projected/549b6c98-7f0f-45f4-bf16-e0c9f0157a72-kube-api-access-kb7g8\") pod \"console-operator-58897d9998-xhjvn\" (UID: \"549b6c98-7f0f-45f4-bf16-e0c9f0157a72\") " pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.023813 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.029405 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh2rw\" (UniqueName: \"kubernetes.io/projected/2f57fca5-f644-4ce3-87fb-7cd1b17118ec-kube-api-access-rh2rw\") pod \"machine-approver-56656f9798-rd4g2\" (UID: \"2f57fca5-f644-4ce3-87fb-7cd1b17118ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.034918 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcdr"] Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.061786 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-l57nd" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.068144 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.069689 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.072259 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwc8s\" (UniqueName: \"kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s\") pod \"oauth-openshift-558db77b4-kcmfd\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") " pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.077892 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44"] Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.080220 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.088583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/022a9537-e4e4-46d1-98d8-eb9f8d88b83e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5xckh\" (UID: \"022a9537-e4e4-46d1-98d8-eb9f8d88b83e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.094752 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.105759 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.106216 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.106598 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.606572208 +0000 UTC m=+147.964425872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.112734 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.116517 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49ph\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.120857 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.127357 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.135993 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.152048 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.156644 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.160058 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82w\" (UniqueName: \"kubernetes.io/projected/06e673dd-e486-441e-b971-a72c53032558-kube-api-access-wn82w\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5nrn\" (UID: \"06e673dd-e486-441e-b971-a72c53032558\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.175657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9g9\" (UniqueName: \"kubernetes.io/projected/26adc908-f613-427c-821d-9070ea52de42-kube-api-access-gh9g9\") pod \"machine-config-server-wpfnn\" (UID: \"26adc908-f613-427c-821d-9070ea52de42\") " pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.191140 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7f7j\" (UniqueName: \"kubernetes.io/projected/cdc47495-7d8a-4981-8647-f93a058fe075-kube-api-access-j7f7j\") pod \"router-default-5444994796-49lf7\" (UID: \"cdc47495-7d8a-4981-8647-f93a058fe075\") " pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.212890 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: 
\"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.213552 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.713542155 +0000 UTC m=+148.071395819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.221408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2m54\" (UniqueName: \"kubernetes.io/projected/7fafa3c1-2b8b-4f48-b875-b3951548b8dc-kube-api-access-z2m54\") pod \"kube-storage-version-migrator-operator-b67b599dd-xmqnm\" (UID: \"7fafa3c1-2b8b-4f48-b875-b3951548b8dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.232804 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkxx\" (UniqueName: \"kubernetes.io/projected/4c62b972-1fe6-4b17-9a53-37fdeaa4bd32-kube-api-access-gtkxx\") pod \"catalog-operator-68c6474976-9fkzb\" (UID: \"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.255706 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8sqpj\" (UniqueName: \"kubernetes.io/projected/3fbd96a5-cdcb-4fa4-81c0-ce07cd660089-kube-api-access-8sqpj\") pod \"service-ca-operator-777779d784-kmfxg\" (UID: \"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.266127 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.268447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7bx\" (UniqueName: \"kubernetes.io/projected/66c0e807-851e-4efe-bab8-90f94bb915e8-kube-api-access-ww7bx\") pod \"olm-operator-6b444d44fb-vvk64\" (UID: \"66c0e807-851e-4efe-bab8-90f94bb915e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.282980 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wpfnn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.288480 4915 generic.go:334] "Generic (PLEG): container finished" podID="16f09be8-b919-45ca-8a58-38e72e3bb85c" containerID="6423778e0467a263b4de6f906248713e1d4bb2b5be65b30a5a428886f4c827a7" exitCode=0 Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.288622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" event={"ID":"16f09be8-b919-45ca-8a58-38e72e3bb85c","Type":"ContainerDied","Data":"6423778e0467a263b4de6f906248713e1d4bb2b5be65b30a5a428886f4c827a7"} Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.288869 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l89vb\" (UniqueName: \"kubernetes.io/projected/c3f419c0-de2f-49f4-8d15-34a04465730b-kube-api-access-l89vb\") pod \"service-ca-9c57cc56f-x4lgb\" (UID: \"c3f419c0-de2f-49f4-8d15-34a04465730b\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.294868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" event={"ID":"3c01f2fd-2df1-4def-9cb0-141697bd5e80","Type":"ContainerStarted","Data":"74e15fb001558f69905a9aba307e4f514fab2974b8638473ea5228aa65a9b79a"} Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.301007 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.304473 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ljskk"] Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.309006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" event={"ID":"78d062b6-d48b-44f6-abc5-c713cf372402","Type":"ContainerStarted","Data":"805674bcf3559ea815e52dc8f3ddd1b81a0ab877b9962676da62c261133cbfd3"} Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.309529 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.311466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9536396c-05f0-49f2-96b6-969e1c9b92a2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pl9zt\" (UID: \"9536396c-05f0-49f2-96b6-969e1c9b92a2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.313347 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sd8x9" event={"ID":"d97ce85d-90e3-410f-bd7c-812149c6933f","Type":"ContainerStarted","Data":"24df1cc5a74222740510a219dc1ba69914f943471e1404fcfa027b80910e3862"} Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.314206 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 
18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.314351 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.814324001 +0000 UTC m=+148.172177665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.314577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.314867 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.814855498 +0000 UTC m=+148.172709162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.332204 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" event={"ID":"8ea219a5-50e9-41e3-887e-e23d61ed73ef","Type":"ContainerStarted","Data":"35620ae5f0c2fd877894f307ae7b6d15f268050f019ba915675d961f0e337406"} Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.342115 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkl7h\" (UniqueName: \"kubernetes.io/projected/972629a9-8882-491c-863e-770570a0aeac-kube-api-access-jkl7h\") pod \"packageserver-d55dfcdfc-kqphj\" (UID: \"972629a9-8882-491c-863e-770570a0aeac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.349999 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.350302 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.362149 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.364636 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkngm\" (UniqueName: \"kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm\") pod \"collect-profiles-29492310-cvxsw\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.370173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4c3452d-d381-4a8f-a917-d94870b996df-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gvmtd\" (UID: \"f4c3452d-d381-4a8f-a917-d94870b996df\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.406965 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsthd\" (UniqueName: \"kubernetes.io/projected/ee527873-9e5c-4c8f-8634-8cc2dc627cbc-kube-api-access-bsthd\") pod \"dns-default-gpbpr\" (UID: \"ee527873-9e5c-4c8f-8634-8cc2dc627cbc\") " pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.413179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz2z5\" (UniqueName: \"kubernetes.io/projected/163cabda-fdf9-41c8-a913-db9e8f848e91-kube-api-access-xz2z5\") pod \"package-server-manager-789f6589d5-gwxs9\" (UID: \"163cabda-fdf9-41c8-a913-db9e8f848e91\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.415495 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.418901 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:16.918883557 +0000 UTC m=+148.276737211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.456157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2b4k\" (UniqueName: \"kubernetes.io/projected/d6f6bcf0-b97a-4e23-9c09-d9f817d725ac-kube-api-access-h2b4k\") pod \"csi-hostpathplugin-4lnms\" (UID: \"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac\") " pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.461767 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnsp\" (UniqueName: \"kubernetes.io/projected/f2c64c84-9ed1-47b6-af09-0af742ec9771-kube-api-access-ppnsp\") pod 
\"openshift-config-operator-7777fb866f-7npbf\" (UID: \"f2c64c84-9ed1-47b6-af09-0af742ec9771\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.470659 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.485225 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.494323 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.499814 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.511300 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.517590 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.518249 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.519751 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.019726535 +0000 UTC m=+148.377580189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.521229 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.529034 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.539592 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.546842 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.562141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr76n\" (UniqueName: \"kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n\") pod \"marketplace-operator-79b997595-qgxwz\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.562294 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch646\" (UniqueName: \"kubernetes.io/projected/c2e593e2-bf9f-415b-9efd-6ffd49cf9905-kube-api-access-ch646\") pod \"control-plane-machine-set-operator-78cbb6b69f-tc9fv\" (UID: \"c2e593e2-bf9f-415b-9efd-6ffd49cf9905\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.562733 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.563847 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrzn\" (UniqueName: \"kubernetes.io/projected/60b5b6a8-738d-4575-a1d1-ecdec1decb98-kube-api-access-lrrzn\") pod \"ingress-canary-nw5h9\" (UID: \"60b5b6a8-738d-4575-a1d1-ecdec1decb98\") " pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.574340 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nw5h9" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.590264 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.603206 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.631152 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.631483 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.131468363 +0000 UTC m=+148.489322017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.732186 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.732483 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.232473323 +0000 UTC m=+148.590326987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.762937 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.784982 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.832758 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.833056 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.333035247 +0000 UTC m=+148.690888911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.840031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.840488 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.340472593 +0000 UTC m=+148.698326257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.941011 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:16 crc kubenswrapper[4915]: E0127 18:44:16.941457 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.441439802 +0000 UTC m=+148.799293466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:16 crc kubenswrapper[4915]: I0127 18:44:16.962981 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:16.998062 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.016502 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.042813 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.043125 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.54311349 +0000 UTC m=+148.900967144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.144289 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.144587 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.644573046 +0000 UTC m=+149.002426710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: W0127 18:44:17.168268 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4848a06_bbb5_4855_b7f7_baa6e534cce2.slice/crio-5ac6adf0029dc6e3731b0357bf46563b12c84b147241493f322dd77128a78896 WatchSource:0}: Error finding container 5ac6adf0029dc6e3731b0357bf46563b12c84b147241493f322dd77128a78896: Status 404 returned error can't find the container with id 5ac6adf0029dc6e3731b0357bf46563b12c84b147241493f322dd77128a78896 Jan 27 18:44:17 crc kubenswrapper[4915]: W0127 18:44:17.191434 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36eeecc3_2b95_4f0f_a182_49b66fdb48c1.slice/crio-18a0645b5c8aed99b8d6afc952638ff6265b5875463bbe8920dce732e1a7129c WatchSource:0}: Error finding container 18a0645b5c8aed99b8d6afc952638ff6265b5875463bbe8920dce732e1a7129c: Status 404 returned error can't find the container with id 18a0645b5c8aed99b8d6afc952638ff6265b5875463bbe8920dce732e1a7129c Jan 27 18:44:17 crc kubenswrapper[4915]: W0127 18:44:17.205320 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e1f0e51_1e0a_4449_9b41_54894000941f.slice/crio-4af5fb695e02ad0ca29a360a9fd9e924b59bb2faefe973ef673a43056600bf50 WatchSource:0}: Error finding container 4af5fb695e02ad0ca29a360a9fd9e924b59bb2faefe973ef673a43056600bf50: Status 404 returned error can't find the container with 
id 4af5fb695e02ad0ca29a360a9fd9e924b59bb2faefe973ef673a43056600bf50 Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.247984 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.248429 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.748412082 +0000 UTC m=+149.106265746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.339766 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sd8x9" podStartSLOduration=127.339750438 podStartE2EDuration="2m7.339750438s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:17.304668996 +0000 UTC m=+148.662522660" watchObservedRunningTime="2026-01-27 18:44:17.339750438 +0000 UTC m=+148.697604102" Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.356838 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.357176 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.857158402 +0000 UTC m=+149.215012066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.376468 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" podStartSLOduration=127.37645269 podStartE2EDuration="2m7.37645269s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:17.374304933 +0000 UTC m=+148.732158597" watchObservedRunningTime="2026-01-27 18:44:17.37645269 +0000 UTC m=+148.734306354" Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.442358 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6kwr6" 
podStartSLOduration=127.442341928 podStartE2EDuration="2m7.442341928s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:17.441379456 +0000 UTC m=+148.799233120" watchObservedRunningTime="2026-01-27 18:44:17.442341928 +0000 UTC m=+148.800195592" Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.458267 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.458887 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:17.958866951 +0000 UTC m=+149.316720685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.494876 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-49lf7" event={"ID":"cdc47495-7d8a-4981-8647-f93a058fe075","Type":"ContainerStarted","Data":"79c7a7a72703b1e1186bf045a166d60b3e8ab736f2e4db51a37c9b8097e330f1"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495167 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-49lf7" event={"ID":"cdc47495-7d8a-4981-8647-f93a058fe075","Type":"ContainerStarted","Data":"6f2717f8b1c612b47f06cf93212ab6ca8d0a46872e3a5131a9cd80ba66143cff"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495262 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" event={"ID":"8ea219a5-50e9-41e3-887e-e23d61ed73ef","Type":"ContainerStarted","Data":"6b3249715210fc752a7711ed4d8ab506520980f975f8858e43bb92e96f96b759"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495341 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" event={"ID":"8ea219a5-50e9-41e3-887e-e23d61ed73ef","Type":"ContainerStarted","Data":"9f9af6bd9a94f6d7365801e67953b377dc3911bbf2f878b8add0929307a2e2d9"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" 
event={"ID":"16f09be8-b919-45ca-8a58-38e72e3bb85c","Type":"ContainerStarted","Data":"16ecd165042019672e0911d26ecd7dc9f0c466d483004dc06e7d06f96eeaec56"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" event={"ID":"36eeecc3-2b95-4f0f-a182-49b66fdb48c1","Type":"ContainerStarted","Data":"18a0645b5c8aed99b8d6afc952638ff6265b5875463bbe8920dce732e1a7129c"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495569 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" event={"ID":"33cf36b9-fe39-4635-971e-e6b434b58980","Type":"ContainerStarted","Data":"0e6b62c573c1cd5ccfb7fed2aec12b4d3f7e04b541e186afee84c87b69bb47bc"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495656 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" event={"ID":"33cf36b9-fe39-4635-971e-e6b434b58980","Type":"ContainerStarted","Data":"7da410a9cc3cc147f318b6ea9de55fa4490b95ba5c96b813bdad68454d5440d5"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495718 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" event={"ID":"9e1f0e51-1e0a-4449-9b41-54894000941f","Type":"ContainerStarted","Data":"4af5fb695e02ad0ca29a360a9fd9e924b59bb2faefe973ef673a43056600bf50"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495777 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" event={"ID":"b4848a06-bbb5-4855-b7f7-baa6e534cce2","Type":"ContainerStarted","Data":"5ac6adf0029dc6e3731b0357bf46563b12c84b147241493f322dd77128a78896"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.495865 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" event={"ID":"78d062b6-d48b-44f6-abc5-c713cf372402","Type":"ContainerStarted","Data":"4ae6e10b9c324274a85783361445605a0b6174165b9173ba1eddd7ed7246caa6"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.496098 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" event={"ID":"78d062b6-d48b-44f6-abc5-c713cf372402","Type":"ContainerStarted","Data":"9ed91e5387a603e088ce08c709d86c1a489640eaab31ae9b2434e02c7675bd6a"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.496172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sd8x9" event={"ID":"d97ce85d-90e3-410f-bd7c-812149c6933f","Type":"ContainerStarted","Data":"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.496239 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" event={"ID":"2f57fca5-f644-4ce3-87fb-7cd1b17118ec","Type":"ContainerStarted","Data":"068f44cb6ba437dae8f11b85cd1cada0a5aec8160f7a4e9b1cd50cea2cb2c806"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.496299 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" event={"ID":"2f57fca5-f644-4ce3-87fb-7cd1b17118ec","Type":"ContainerStarted","Data":"ad7117e45ee6f7ee5e4df2644ba5c402194db3ef0aee260962b9bc65b4c5c6d1"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.496356 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wpfnn" event={"ID":"26adc908-f613-427c-821d-9070ea52de42","Type":"ContainerStarted","Data":"6671ded998bbc6aa27fb04a84c426767c8987b18de803fbcf6765e34bb7a6a98"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 
18:44:17.496426 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wpfnn" event={"ID":"26adc908-f613-427c-821d-9070ea52de42","Type":"ContainerStarted","Data":"cba8402707028d6aaaeb3cf9b5c6fff168fae4f5e4e0324b6f3cc65f7fe71008"} Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.529598 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" podStartSLOduration=127.529575651 podStartE2EDuration="2m7.529575651s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:17.527800948 +0000 UTC m=+148.885654612" watchObservedRunningTime="2026-01-27 18:44:17.529575651 +0000 UTC m=+148.887429315" Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.559935 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.563302 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.063281195 +0000 UTC m=+149.421134849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.664668 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.667986 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.167973122 +0000 UTC m=+149.525826786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.689391 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.769209 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.769369 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.269342626 +0000 UTC m=+149.627196290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.769886 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.770179 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.270159667 +0000 UTC m=+149.628013331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.807996 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwhgq"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.855840 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" podStartSLOduration=127.855823409 podStartE2EDuration="2m7.855823409s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:17.815180456 +0000 UTC m=+149.173034130" watchObservedRunningTime="2026-01-27 18:44:17.855823409 +0000 UTC m=+149.213677073" Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.871770 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.872059 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 18:44:18.372026448 +0000 UTC m=+149.729880112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.873142 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.873576 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.373554067 +0000 UTC m=+149.731407731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.877298 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xhjvn"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.882680 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sqx6"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.885148 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l57nd"] Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.973775 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.974021 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.4739892 +0000 UTC m=+149.831842874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:17 crc kubenswrapper[4915]: I0127 18:44:17.974367 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:17 crc kubenswrapper[4915]: E0127 18:44:17.974715 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.474697409 +0000 UTC m=+149.832551183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.075226 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.075607 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.575593187 +0000 UTC m=+149.933446841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.179259 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.179619 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.679609086 +0000 UTC m=+150.037462750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.214898 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wpfnn" podStartSLOduration=5.21488181 podStartE2EDuration="5.21488181s" podCreationTimestamp="2026-01-27 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.213436781 +0000 UTC m=+149.571290455" watchObservedRunningTime="2026-01-27 18:44:18.21488181 +0000 UTC m=+149.572735474" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.266509 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l8m44" podStartSLOduration=128.266490924 podStartE2EDuration="2m8.266490924s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.242219562 +0000 UTC m=+149.600073226" watchObservedRunningTime="2026-01-27 18:44:18.266490924 +0000 UTC m=+149.624344588" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.268955 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.272131 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.282998 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:18 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:18 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:18 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.283062 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.290982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.291078 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.791064111 +0000 UTC m=+150.148917775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.291363 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.291581 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ljskk" podStartSLOduration=128.291570947 podStartE2EDuration="2m8.291570947s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.28554915 +0000 UTC m=+149.643402814" watchObservedRunningTime="2026-01-27 18:44:18.291570947 +0000 UTC m=+149.649424631" Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.291687 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.791679268 +0000 UTC m=+150.149532932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.328387 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcdr" podStartSLOduration=128.328373211 podStartE2EDuration="2m8.328373211s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.325872278 +0000 UTC m=+149.683725942" watchObservedRunningTime="2026-01-27 18:44:18.328373211 +0000 UTC m=+149.686226875" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.330853 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.391538 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-49lf7" podStartSLOduration=128.391519943 podStartE2EDuration="2m8.391519943s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.391432392 +0000 UTC m=+149.749286056" watchObservedRunningTime="2026-01-27 18:44:18.391519943 +0000 UTC m=+149.749373607" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.391785 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.392187 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:18.892173442 +0000 UTC m=+150.250027106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.405776 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.430846 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.449859 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d27cj"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.453990 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gpbpr"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.503474 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.503823 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.003811668 +0000 UTC m=+150.361665332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.517646 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.525985 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" event={"ID":"7fafa3c1-2b8b-4f48-b875-b3951548b8dc","Type":"ContainerStarted","Data":"93575d774fcab3b28442e11f23f54cd3e847284d114872a35250cb7fbe042cf1"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.552723 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.553639 4915 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.598958 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" event={"ID":"36eeecc3-2b95-4f0f-a182-49b66fdb48c1","Type":"ContainerStarted","Data":"e6d89e8246bddb740e88e9c535521d2b88f48443e964ae2bff52f6ab08f74107"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.604455 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.604712 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.604846 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.104816178 +0000 UTC m=+150.462669842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.605010 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.605071 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.605104 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.605163 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.606116 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.106103645 +0000 UTC m=+150.463957309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.612729 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.621102 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 
18:44:18.622998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.623171 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj"] Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.625516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.638495 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" event={"ID":"d4047fcf-98c6-410c-9388-5af5a8640820","Type":"ContainerStarted","Data":"4fcb89b3732b0e7ea85f15d455f6faf0ede877c050656c948029cfa9f63a8b4d"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.647653 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4sxzb" podStartSLOduration=128.647632839 podStartE2EDuration="2m8.647632839s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.626269534 +0000 UTC m=+149.984123188" watchObservedRunningTime="2026-01-27 18:44:18.647632839 +0000 UTC m=+150.005486503" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.649095 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l57nd" event={"ID":"42d32061-12d1-4e45-8217-73fc194a1b3f","Type":"ContainerStarted","Data":"afbe819ead1beff64b3ca7b58fb5aca37d233c37ca8e115396c00fb180ef1540"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.649310 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l57nd" event={"ID":"42d32061-12d1-4e45-8217-73fc194a1b3f","Type":"ContainerStarted","Data":"165bef44bba31cbd038ed6111170280cf07636029ada4ed120fcf94d250161b6"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.649956 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l57nd" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.664712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" event={"ID":"9e1f0e51-1e0a-4449-9b41-54894000941f","Type":"ContainerStarted","Data":"d47acade621f7e2813640d1e48f3b37fe98dc177bb2a9ef94d1632435d7ee09c"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.664754 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" event={"ID":"9e1f0e51-1e0a-4449-9b41-54894000941f","Type":"ContainerStarted","Data":"96a01f19b0fba5ceb5b6a14c355ea3793dbd352ca590c1c6400647d9a8d3620c"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.669071 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.669122 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l57nd" 
podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:18 crc kubenswrapper[4915]: W0127 18:44:18.672160 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9565c20_c5f0_49a7_acd2_1cc2f4862085.slice/crio-ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb WatchSource:0}: Error finding container ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb: Status 404 returned error can't find the container with id ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.682617 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l57nd" podStartSLOduration=128.682599069 podStartE2EDuration="2m8.682599069s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.682216564 +0000 UTC m=+150.040070228" watchObservedRunningTime="2026-01-27 18:44:18.682599069 +0000 UTC m=+150.040452733" Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.691555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" event={"ID":"16f09be8-b919-45ca-8a58-38e72e3bb85c","Type":"ContainerStarted","Data":"a2d2e75126bff708b473e5d6393804ed44e4f49dea1923ff8a629b130cdb5bcf"} Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.707514 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" event={"ID":"2f57fca5-f644-4ce3-87fb-7cd1b17118ec","Type":"ContainerStarted","Data":"1d474c86588c41b04493c57335eb07b02ac637707693e0fd1cb4dcc60a872ac5"} Jan 27 18:44:18 
crc kubenswrapper[4915]: I0127 18:44:18.708580 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.708993 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m7bv2" podStartSLOduration=128.708976639 podStartE2EDuration="2m8.708976639s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.706692949 +0000 UTC m=+150.064546613" watchObservedRunningTime="2026-01-27 18:44:18.708976639 +0000 UTC m=+150.066830303"
Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.709652 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.209620907 +0000 UTC m=+150.567474561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.740172 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.746690 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" event={"ID":"4ed4ecf0-fe78-4606-bbfc-abe9539010be","Type":"ContainerStarted","Data":"09bcd9d2be115728a8679d32ce43e3d072f261ac1d6f9a992ecf97b44751167a"}
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.779658 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" podStartSLOduration=128.779640568 podStartE2EDuration="2m8.779640568s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.750008737 +0000 UTC m=+150.107862421" watchObservedRunningTime="2026-01-27 18:44:18.779640568 +0000 UTC m=+150.137494232"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.782819 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.787617 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rd4g2" podStartSLOduration=129.78759844 podStartE2EDuration="2m9.78759844s" podCreationTimestamp="2026-01-27 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.780227786 +0000 UTC m=+150.138081450" watchObservedRunningTime="2026-01-27 18:44:18.78759844 +0000 UTC m=+150.145452104"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.791221 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" event={"ID":"022a9537-e4e4-46d1-98d8-eb9f8d88b83e","Type":"ContainerStarted","Data":"37d62e211832f8560488ad43d19848cec7c533e1bb2127950f66fc25492fc4ec"}
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.793689 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.796445 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" event={"ID":"549b6c98-7f0f-45f4-bf16-e0c9f0157a72","Type":"ContainerStarted","Data":"d102bb17e111d1bb4d0c020575cf9630700ead64f4422de6c548f209c6e8fc80"}
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.797444 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.797554 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.800229 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.800969 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.803866 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" event={"ID":"b4848a06-bbb5-4855-b7f7-baa6e534cce2","Type":"ContainerStarted","Data":"672370d9ec389e5df20200ef0a3ee8290193845c84105cc526d785052cab700a"}
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.803888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" event={"ID":"b4848a06-bbb5-4855-b7f7-baa6e534cce2","Type":"ContainerStarted","Data":"b7aeaac11c42bc9b6b32760f576184ae76645a55a7e0786b8751b7fefaa1c5b7"}
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.804334 4915 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhjvn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.804372 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" podUID="549b6c98-7f0f-45f4-bf16-e0c9f0157a72" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.808014 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" podStartSLOduration=128.808004873 podStartE2EDuration="2m8.808004873s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.80700537 +0000 UTC m=+150.164859034" watchObservedRunningTime="2026-01-27 18:44:18.808004873 +0000 UTC m=+150.165858537"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.809580 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.810631 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.310616597 +0000 UTC m=+150.668470261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.811106 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.841355 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4lnms"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.857004 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" podStartSLOduration=128.84276859 podStartE2EDuration="2m8.84276859s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.834811918 +0000 UTC m=+150.192665582" watchObservedRunningTime="2026-01-27 18:44:18.84276859 +0000 UTC m=+150.200622254"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.860872 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7npbf"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.861074 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zkhzf" podStartSLOduration=128.861057526 podStartE2EDuration="2m8.861057526s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:18.858330021 +0000 UTC m=+150.216183685" watchObservedRunningTime="2026-01-27 18:44:18.861057526 +0000 UTC m=+150.218911190"
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.892986 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nw5h9"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.907102 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.911378 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.911924 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:18 crc kubenswrapper[4915]: E0127 18:44:18.917386 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.41736054 +0000 UTC m=+150.775214204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.926648 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4lgb"]
Jan 27 18:44:18 crc kubenswrapper[4915]: I0127 18:44:18.934196 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv"]
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.014875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.015732 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.515721506 +0000 UTC m=+150.873575170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.126422 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.127029 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.627012628 +0000 UTC m=+150.984866292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.228146 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.228505 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.728493133 +0000 UTC m=+151.086346797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.272112 4915 csr.go:261] certificate signing request csr-gpz4z is approved, waiting to be issued
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.272532 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:44:19 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld
Jan 27 18:44:19 crc kubenswrapper[4915]: [+]process-running ok
Jan 27 18:44:19 crc kubenswrapper[4915]: healthz check failed
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.272567 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.278724 4915 csr.go:257] certificate signing request csr-gpz4z is issued
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.329660 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.331758 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.831727402 +0000 UTC m=+151.189581066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.439969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.440671 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:19.940659564 +0000 UTC m=+151.298513228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.501632 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.502696 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.526418 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.541006 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.542630 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.042592596 +0000 UTC m=+151.400446260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.642501 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.643032 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.643875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.644176 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.144165933 +0000 UTC m=+151.502019597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.744399 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.745440 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.245426386 +0000 UTC m=+151.603280050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.835136 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" event={"ID":"972629a9-8882-491c-863e-770570a0aeac","Type":"ContainerStarted","Data":"71ed966324f9a39509e962b30935af7041f7648e9bf61bf5ca7b6a00f5966c4a"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.843910 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gpbpr" event={"ID":"ee527873-9e5c-4c8f-8634-8cc2dc627cbc","Type":"ContainerStarted","Data":"0abd4f281603437ba00879b4a9f5195ddfe0fd278101f1583e401faa5feb8d93"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.843948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gpbpr" event={"ID":"ee527873-9e5c-4c8f-8634-8cc2dc627cbc","Type":"ContainerStarted","Data":"e793975a99eb8fd11e3c013b50a0a5b4867b8f912c237b37c1bfc88945493326"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.846854 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.847113 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.347101414 +0000 UTC m=+151.704955078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.904025 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" event={"ID":"864c2cff-cecd-4156-afd9-088a9b9a1956","Type":"ContainerStarted","Data":"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.904084 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" event={"ID":"864c2cff-cecd-4156-afd9-088a9b9a1956","Type":"ContainerStarted","Data":"8c436607be9305b9187686587ead7b1560d672f9a8ca79a9e930e42f50b27ebc"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.905161 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.908627 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" event={"ID":"c2e593e2-bf9f-415b-9efd-6ffd49cf9905","Type":"ContainerStarted","Data":"2dc63efdcc8cfb48d1dbb39e0c1e29c1185047fd417eb6dbacc0d618fa4ecbe8"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.917064 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" event={"ID":"9536396c-05f0-49f2-96b6-969e1c9b92a2","Type":"ContainerStarted","Data":"66537886a5605e65e11e5238bcbdd16c5767a32984f9a8553b57469cf7873b64"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.930757 4915 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kcmfd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.930833 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.932079 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"05be701a203c10c00c51e84d6705f6d76e98333ca50ad3c99482004ee3f8ef3c"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.944582 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" event={"ID":"7fafa3c1-2b8b-4f48-b875-b3951548b8dc","Type":"ContainerStarted","Data":"b04d55e91471ec49742e417e49945738cc51546dcf549fd8f7c8a3b5717c8dbd"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.947440 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:19 crc kubenswrapper[4915]: E0127 18:44:19.948722 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.448676392 +0000 UTC m=+151.806530056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.950401 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" event={"ID":"d4047fcf-98c6-410c-9388-5af5a8640820","Type":"ContainerStarted","Data":"bd402269a94f91e5024fe5ce3890e3f91eec1deec60890d835e0a23e5f9fcf31"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.961484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" event={"ID":"163cabda-fdf9-41c8-a913-db9e8f848e91","Type":"ContainerStarted","Data":"cf1ede2b6b6c2b924dcbc4ebc1c4639fb15654db4055c1048257d00d137112ed"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.961508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" event={"ID":"163cabda-fdf9-41c8-a913-db9e8f848e91","Type":"ContainerStarted","Data":"00724997c5103377a90b3d926e10876100f58cc037ef3ba2e3a3b6d8b267d23b"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.965922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nw5h9" event={"ID":"60b5b6a8-738d-4575-a1d1-ecdec1decb98","Type":"ContainerStarted","Data":"f6b78e1397e646c6801fa70dd1acf24159ff64278a1c776c53f51d5942a59405"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.967752 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" event={"ID":"66c0e807-851e-4efe-bab8-90f94bb915e8","Type":"ContainerStarted","Data":"49d37f420c57f683a4407af10a081a330acdf0b08a7b1b9dc189ab8ebf53d4e2"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.970754 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" event={"ID":"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089","Type":"ContainerStarted","Data":"89b0194504da3f80b783c1cf0691dfe5d2a68511e1bc7055af270d991c332b63"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.970821 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" event={"ID":"3fbd96a5-cdcb-4fa4-81c0-ce07cd660089","Type":"ContainerStarted","Data":"84ad754da9b3bb74048ffe3ca6ae4d63edf79d8951bc8c8505902fb2cd65fe64"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.971630 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" event={"ID":"cb28ab26-da64-46ae-9558-ab9c29622a7d","Type":"ContainerStarted","Data":"f740893398351b3766eb9cd7f85507a196cd15297da4049f66c3778cff3f3ea6"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.971654 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" event={"ID":"cb28ab26-da64-46ae-9558-ab9c29622a7d","Type":"ContainerStarted","Data":"3f7be33b454149148f6fc000db0fbf28ac1609b4c3ef41d8652e6468e058ce5d"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.973604 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" event={"ID":"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32","Type":"ContainerStarted","Data":"e88da9f8324d51e3d2aaff4dc9b60fa42de7e0bfe41d79cbcd74c6a5c32e4939"}
Jan 27 18:44:19 crc kubenswrapper[4915]: I0127 18:44:19.987751 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" event={"ID":"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac","Type":"ContainerStarted","Data":"bced0389a73b5fb17fd9e23564d0ad3e901dda1ba05a4cd79690a9cccd37efac"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.000371 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" event={"ID":"c9565c20-c5f0-49a7-acd2-1cc2f4862085","Type":"ContainerStarted","Data":"6a7e413166283ba88181998fce69b87f999667af6f8ab0d9276cd140bee67cbb"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.000416 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" event={"ID":"c9565c20-c5f0-49a7-acd2-1cc2f4862085","Type":"ContainerStarted","Data":"ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.004179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" event={"ID":"549b6c98-7f0f-45f4-bf16-e0c9f0157a72","Type":"ContainerStarted","Data":"853f06f6109246d7da96e2b361c369263392003bc0fcc61f6f172dad93960418"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.005020 4915 patch_prober.go:28] interesting pod/console-operator-58897d9998-xhjvn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.005224 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xhjvn" podUID="549b6c98-7f0f-45f4-bf16-e0c9f0157a72" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.013466 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" event={"ID":"c3f419c0-de2f-49f4-8d15-34a04465730b","Type":"ContainerStarted","Data":"30ffd3cb7fec0d042ac42c791755fd3b4148a549f2dd60eafd2192d66ef608fb"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.026456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" event={"ID":"f2c64c84-9ed1-47b6-af09-0af742ec9771","Type":"ContainerStarted","Data":"e1fc3dddb06489b84deccba71190f1944ff3329e5496f1707334b68ef96ae1cc"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.033875 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" event={"ID":"5d15c942-59c9-4570-8c33-a972dce41fd6","Type":"ContainerStarted","Data":"91b126b2c3df90ca792c6c75d872742e560d1906feb6f27c190c79bbb3289386"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.033916 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" event={"ID":"5d15c942-59c9-4570-8c33-a972dce41fd6","Type":"ContainerStarted","Data":"c9d6d0b715b79bf3bb3cebe6d5211b3a1ea09c8dce1ba150c589b8558e6d5812"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.038060 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5xckh" event={"ID":"022a9537-e4e4-46d1-98d8-eb9f8d88b83e","Type":"ContainerStarted","Data":"963ca5a9dda1ec8de0bf4b0391ed0c1c62a585f0de27bb201fc5fa8d394886d1"}
Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.053356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.053693 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.553682633 +0000 UTC m=+151.911536297 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.084675 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" event={"ID":"06e673dd-e486-441e-b971-a72c53032558","Type":"ContainerStarted","Data":"7a5b41d31cb86da44bf95ab3eaf8a21c52435c5275b59889d9d294182c0fddeb"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.084716 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" event={"ID":"06e673dd-e486-441e-b971-a72c53032558","Type":"ContainerStarted","Data":"adce569436c57bc92ec69dc67d9f7f4d2e1fa06db174347100afdad4dc1c494a"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.140454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" event={"ID":"e03d9c82-1127-4320-858f-bb1a66eb487f","Type":"ContainerStarted","Data":"ea7bc4f90c14eeed9c88083a308523d5994b2416ae07a0af136136bf984c80c8"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.140500 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" event={"ID":"e03d9c82-1127-4320-858f-bb1a66eb487f","Type":"ContainerStarted","Data":"7ad60d975c251446bcdc3556e0209d36ea761bf2c6949a8a4d2e899fba11050e"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.140485 4915 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" podStartSLOduration=130.14047043 podStartE2EDuration="2m10.14047043s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:20.140080075 +0000 UTC m=+151.497933739" watchObservedRunningTime="2026-01-27 18:44:20.14047043 +0000 UTC m=+151.498324084" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.146754 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" event={"ID":"a0b86903-8500-47c1-9fdb-5b0dea888375","Type":"ContainerStarted","Data":"08463e40c0545d2fe137831a9a7649102c4a1701a2a49a73265ccd8839f5eb83"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.152825 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" event={"ID":"f4c3452d-d381-4a8f-a917-d94870b996df","Type":"ContainerStarted","Data":"ed491d84cd5dad80ffdbf93f318441b6ed22a729750426d8efe980ffd987103f"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.164291 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.167208 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.667191744 +0000 UTC m=+152.025045408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.190871 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" event={"ID":"4ed4ecf0-fe78-4606-bbfc-abe9539010be","Type":"ContainerStarted","Data":"c7e22b3a3498de180b1bc31302d802a98166ad72d80cc7b04a3e51ec84062727"} Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.193569 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.193631 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.202917 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bb2js" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.208703 4915 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5s7q5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 18:44:20 crc 
kubenswrapper[4915]: [+]log ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]etcd ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/max-in-flight-filter ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 18:44:20 crc kubenswrapper[4915]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 18:44:20 crc kubenswrapper[4915]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/openshift.io-startinformers ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 18:44:20 crc kubenswrapper[4915]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 18:44:20 crc kubenswrapper[4915]: livez check failed Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.208748 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" podUID="16f09be8-b919-45ca-8a58-38e72e3bb85c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.209717 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" podStartSLOduration=130.209699911 podStartE2EDuration="2m10.209699911s" podCreationTimestamp="2026-01-27 18:42:10 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:20.208938491 +0000 UTC m=+151.566792155" watchObservedRunningTime="2026-01-27 18:44:20.209699911 +0000 UTC m=+151.567553565" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.259846 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8sqx6" podStartSLOduration=130.259829626 podStartE2EDuration="2m10.259829626s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:20.236003849 +0000 UTC m=+151.593857513" watchObservedRunningTime="2026-01-27 18:44:20.259829626 +0000 UTC m=+151.617683290" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.266176 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.266514 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.766503172 +0000 UTC m=+152.124356826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.272260 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:20 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:20 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:20 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.272310 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.282982 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 18:39:19 +0000 UTC, rotation deadline is 2026-10-29 20:45:42.213598488 +0000 UTC Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.283021 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6602h1m21.930581144s for next certificate rotation Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.285378 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5nrn" podStartSLOduration=130.285358484 
podStartE2EDuration="2m10.285358484s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:20.259823976 +0000 UTC m=+151.617677650" watchObservedRunningTime="2026-01-27 18:44:20.285358484 +0000 UTC m=+151.643212148" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.286640 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xmqnm" podStartSLOduration=130.286633391 podStartE2EDuration="2m10.286633391s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:20.283270337 +0000 UTC m=+151.641124001" watchObservedRunningTime="2026-01-27 18:44:20.286633391 +0000 UTC m=+151.644487055" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.366873 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.368077 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.868054038 +0000 UTC m=+152.225907702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.469349 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.470010 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:20.96999343 +0000 UTC m=+152.327847094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.572443 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.573316 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.07329942 +0000 UTC m=+152.431153084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.602111 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.624990 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.625059 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.674318 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.674724 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.174708055 +0000 UTC m=+152.532561719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.775268 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.775495 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.275475742 +0000 UTC m=+152.633329416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.775708 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.776426 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.276408864 +0000 UTC m=+152.634262528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.876920 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.877138 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.37711223 +0000 UTC m=+152.734965894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.877205 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.877554 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.377542965 +0000 UTC m=+152.735396719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.977560 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.977754 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.477729374 +0000 UTC m=+152.835583038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:20 crc kubenswrapper[4915]: I0127 18:44:20.977878 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:20 crc kubenswrapper[4915]: E0127 18:44:20.978224 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.478212901 +0000 UTC m=+152.836066565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.078990 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.079420 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.579401863 +0000 UTC m=+152.937255527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.180238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.180617 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.680600675 +0000 UTC m=+153.038454339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.195844 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"eddcd426d3f032cd8008deb2aa3efcdd7e69d72fd550730f0c60a437772560bd"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.195901 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.198644 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"84c116cbfa9a449461d5c88a62fec01b9644487e55abb3a973dbfa770e3fd7bf"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.198678 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"663220c2eff04ce8afc8c746ddc766b4cd057c34251f82c5294e55b66939060e"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.205492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" event={"ID":"163cabda-fdf9-41c8-a913-db9e8f848e91","Type":"ContainerStarted","Data":"d8223706f1400fa744432fdd0787fe2d076305fdc3e8fda906fa70521b1a1fc3"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.205596 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.207634 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" event={"ID":"f4c3452d-d381-4a8f-a917-d94870b996df","Type":"ContainerStarted","Data":"07ff6de4580ba3c7d9ed13f780355a04e68beef845e2c98885f928aa07ed4755"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.209630 4915 generic.go:334] "Generic (PLEG): container finished" podID="f2c64c84-9ed1-47b6-af09-0af742ec9771" containerID="249a3738dfd0d3870b68d45c18efe6e965c6bf482037958c875f9dff62275fe6" exitCode=0
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.209845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" event={"ID":"f2c64c84-9ed1-47b6-af09-0af742ec9771","Type":"ContainerDied","Data":"249a3738dfd0d3870b68d45c18efe6e965c6bf482037958c875f9dff62275fe6"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.211411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" event={"ID":"4ed4ecf0-fe78-4606-bbfc-abe9539010be","Type":"ContainerStarted","Data":"ef18abe508cb5e453f4908ebdf8deb2bcfbc70ec2ac7e1b8cb257c92c392ba12"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.216765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" event={"ID":"cb28ab26-da64-46ae-9558-ab9c29622a7d","Type":"ContainerStarted","Data":"98064343b6f61a80322503c27ae5cc4b5e901f65f3ba733cffed4ffb97a44d2f"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.218298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nw5h9" event={"ID":"60b5b6a8-738d-4575-a1d1-ecdec1decb98","Type":"ContainerStarted","Data":"a787c53f03b43e167d3c0b00c709956ddfa83644c4370cdeb1d82353abbf0428"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.220438 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" event={"ID":"4c62b972-1fe6-4b17-9a53-37fdeaa4bd32","Type":"ContainerStarted","Data":"199706600cf355b842fae1e3335f57aa1ed001c61a1a876bb8f4bb45337c7006"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.220709 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.222223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" event={"ID":"972629a9-8882-491c-863e-770570a0aeac","Type":"ContainerStarted","Data":"2649c882aae6b89edadd892bf1a1f441f55a0148120b6fa01f8cfd3399064de1"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.222575 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.224695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gpbpr" event={"ID":"ee527873-9e5c-4c8f-8634-8cc2dc627cbc","Type":"ContainerStarted","Data":"b93cb7ed52311e7a7f58a912851e96ffccb14383f63d7fc17db6ba15c0560954"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.224864 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gpbpr"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.226785 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" event={"ID":"a0b86903-8500-47c1-9fdb-5b0dea888375","Type":"ContainerStarted","Data":"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.227423 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.228569 4915 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qgxwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.228601 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.233645 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" event={"ID":"c2e593e2-bf9f-415b-9efd-6ffd49cf9905","Type":"ContainerStarted","Data":"8c7016de8675982ce502329e1894c6ae20fa952efd200a3fcac210af851b35bc"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.239370 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.253672 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" event={"ID":"66c0e807-851e-4efe-bab8-90f94bb915e8","Type":"ContainerStarted","Data":"e4b306080cd11f7751835f05e95533d446927bbb33ddcc3a9fe04813bfc381de"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.254474 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.255905 4915 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vvk64 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.255943 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" podUID="66c0e807-851e-4efe-bab8-90f94bb915e8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.262614 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" event={"ID":"5d15c942-59c9-4570-8c33-a972dce41fd6","Type":"ContainerStarted","Data":"11d2ca0c8cfa0ab794330cda9d385b6c3c46e92e00959d4d8041cd2a31336cfd"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.273216 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" event={"ID":"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac","Type":"ContainerStarted","Data":"114c4e19882a96f011920e60398b861189c7063e02462f803e891d8cee3d62a3"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.273320 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 18:44:21 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld
Jan 27 18:44:21 crc kubenswrapper[4915]: [+]process-running ok
Jan 27 18:44:21 crc kubenswrapper[4915]: healthz check failed
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.273378 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.278650 4915 generic.go:334] "Generic (PLEG): container finished" podID="c9565c20-c5f0-49a7-acd2-1cc2f4862085" containerID="6a7e413166283ba88181998fce69b87f999667af6f8ab0d9276cd140bee67cbb" exitCode=0
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.278713 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" event={"ID":"c9565c20-c5f0-49a7-acd2-1cc2f4862085","Type":"ContainerDied","Data":"6a7e413166283ba88181998fce69b87f999667af6f8ab0d9276cd140bee67cbb"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.280608 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" event={"ID":"c3f419c0-de2f-49f4-8d15-34a04465730b","Type":"ContainerStarted","Data":"9c4400308fdb50e6316eac60b567d33a409f564acee5c5bad7df062e4f703f6a"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.280961 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.281349 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.781328222 +0000 UTC m=+153.139181886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.284116 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a041015ef4f36d956f6d6cbf3d59886322091c0abfe9017f458f2a6f9ed8ac77"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.284146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7987f94952d5a8187f10c1925003253afac83015ee41225a914cf51d386b903b"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.286896 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" event={"ID":"e03d9c82-1127-4320-858f-bb1a66eb487f","Type":"ContainerStarted","Data":"f1cacc63caa5bf69c1060c87fda3c69ddd011a25a02c112c48a636a2e50ada71"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.289767 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" event={"ID":"9536396c-05f0-49f2-96b6-969e1c9b92a2","Type":"ContainerStarted","Data":"99637b2134ad9ca880de5ba13e17acc6677cf142e326b2d5e90ce6d8191f4810"}
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.290500 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.290533 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.292564 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwhgq" podStartSLOduration=131.292553846 podStartE2EDuration="2m11.292553846s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.290830354 +0000 UTC m=+152.648684018" watchObservedRunningTime="2026-01-27 18:44:21.292553846 +0000 UTC m=+152.650407510"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.293675 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" podStartSLOduration=131.29366805 podStartE2EDuration="2m11.29366805s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.273072095 +0000 UTC m=+152.630925759" watchObservedRunningTime="2026-01-27 18:44:21.29366805 +0000 UTC m=+152.651521714"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.326725 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xhjvn"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.351244 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gvmtd" podStartSLOduration=131.351223621 podStartE2EDuration="2m11.351223621s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.318200436 +0000 UTC m=+152.676054100" watchObservedRunningTime="2026-01-27 18:44:21.351223621 +0000 UTC m=+152.709077285"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.383428 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.385607 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.885593473 +0000 UTC m=+153.243447137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.421726 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" podStartSLOduration=131.421711508 podStartE2EDuration="2m11.421711508s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.419178626 +0000 UTC m=+152.777032290" watchObservedRunningTime="2026-01-27 18:44:21.421711508 +0000 UTC m=+152.779565172"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.487370 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.488496 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:21.988482058 +0000 UTC m=+153.346335722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.520735 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d27cj" podStartSLOduration=131.520714612 podStartE2EDuration="2m11.520714612s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.517090836 +0000 UTC m=+152.874944500" watchObservedRunningTime="2026-01-27 18:44:21.520714612 +0000 UTC m=+152.878568276"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.545685 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nw5h9" podStartSLOduration=8.545668403 podStartE2EDuration="8.545668403s" podCreationTimestamp="2026-01-27 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.544555689 +0000 UTC m=+152.902409353" watchObservedRunningTime="2026-01-27 18:44:21.545668403 +0000 UTC m=+152.903522067"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.569199 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" podStartSLOduration=131.569164546 podStartE2EDuration="2m11.569164546s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.56485061 +0000 UTC m=+152.922704274" watchObservedRunningTime="2026-01-27 18:44:21.569164546 +0000 UTC m=+152.927018210"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.591945 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.592243 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.092231433 +0000 UTC m=+153.450085097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.613738 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kmfxg" podStartSLOduration=131.613719979 podStartE2EDuration="2m11.613719979s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.586031353 +0000 UTC m=+152.943885007" watchObservedRunningTime="2026-01-27 18:44:21.613719979 +0000 UTC m=+152.971573643"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.627754 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.643171 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pl9zt" podStartSLOduration=131.643156458 podStartE2EDuration="2m11.643156458s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.623193301 +0000 UTC m=+152.981046965" watchObservedRunningTime="2026-01-27 18:44:21.643156458 +0000 UTC m=+153.001010122"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.679033 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tc9fv" podStartSLOduration=131.679017359 podStartE2EDuration="2m11.679017359s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.649122855 +0000 UTC m=+153.006976519" watchObservedRunningTime="2026-01-27 18:44:21.679017359 +0000 UTC m=+153.036871013"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.695355 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.695536 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.195504692 +0000 UTC m=+153.553358356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.695742 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.696043 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.196031098 +0000 UTC m=+153.553884762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.697670 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9fkzb" podStartSLOduration=131.697655049 podStartE2EDuration="2m11.697655049s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.695212548 +0000 UTC m=+153.053066212" watchObservedRunningTime="2026-01-27 18:44:21.697655049 +0000 UTC m=+153.055508713"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.697943 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mx2qp" podStartSLOduration=131.697937793 podStartE2EDuration="2m11.697937793s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.677814884 +0000 UTC m=+153.035668538" watchObservedRunningTime="2026-01-27 18:44:21.697937793 +0000 UTC m=+153.055791457"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.730838 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v5sff" podStartSLOduration=131.730819896 podStartE2EDuration="2m11.730819896s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.729628471 +0000 UTC m=+153.087482135" watchObservedRunningTime="2026-01-27 18:44:21.730819896 +0000 UTC m=+153.088673560"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.752152 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" podStartSLOduration=131.752138051 podStartE2EDuration="2m11.752138051s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.749423116 +0000 UTC m=+153.107276780" watchObservedRunningTime="2026-01-27 18:44:21.752138051 +0000 UTC m=+153.109991715"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.777618 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gpbpr" podStartSLOduration=8.777603078 podStartE2EDuration="8.777603078s" podCreationTimestamp="2026-01-27 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.775854316 +0000 UTC m=+153.133707980" watchObservedRunningTime="2026-01-27 18:44:21.777603078 +0000 UTC m=+153.135456742"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.797150 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.797545 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.297531725 +0000 UTC m=+153.655385389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.877527 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-x4lgb" podStartSLOduration=131.877509254 podStartE2EDuration="2m11.877509254s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:21.838525202 +0000 UTC m=+153.196378886" watchObservedRunningTime="2026-01-27 18:44:21.877509254 +0000 UTC m=+153.235362918"
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.898392 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.898684 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.398673946 +0000 UTC m=+153.756527610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.998976 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.999160 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.499133219 +0000 UTC m=+153.856986883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:21 crc kubenswrapper[4915]: I0127 18:44:21.999288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:44:21 crc kubenswrapper[4915]: E0127 18:44:21.999573 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.499561635 +0000 UTC m=+153.857415299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.100030 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.100316 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.600287071 +0000 UTC m=+153.958140745 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.100584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.100989 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.60097891 +0000 UTC m=+153.958832624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.201731 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.202198 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.702178022 +0000 UTC m=+154.060031686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.222662 4915 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kqphj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.222739 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" podUID="972629a9-8882-491c-863e-770570a0aeac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.277434 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:22 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:22 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:22 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.277489 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" 
podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.303730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.304078 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.804063843 +0000 UTC m=+154.161917507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.304353 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" event={"ID":"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac","Type":"ContainerStarted","Data":"1c8eaaf918bc87b53833e43ef1af85ffdbfe179b8c95a887439e4c2f46f678b5"} Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.307659 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" 
event={"ID":"f2c64c84-9ed1-47b6-af09-0af742ec9771","Type":"ContainerStarted","Data":"a7b9795e33325e616a79bbae1d217ae47284b98db527bef01b853fcb5a834473"} Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.315020 4915 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qgxwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.315045 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.315108 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.339892 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" podStartSLOduration=132.339871064 podStartE2EDuration="2m12.339871064s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:22.337347652 +0000 UTC m=+153.695201316" watchObservedRunningTime="2026-01-27 18:44:22.339871064 +0000 UTC m=+153.697724728" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.345295 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vvk64" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.404705 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.404928 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.904902361 +0000 UTC m=+154.262756025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.405619 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.407757 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:22.907742018 +0000 UTC m=+154.265595782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.506561 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.508218 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.00820294 +0000 UTC m=+154.366056604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.599609 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.616505 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume\") pod \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.616558 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkngm\" (UniqueName: \"kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm\") pod \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.616704 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume\") pod \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\" (UID: \"c9565c20-c5f0-49a7-acd2-1cc2f4862085\") " Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.616838 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.617158 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.117147902 +0000 UTC m=+154.475001566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.619047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9565c20-c5f0-49a7-acd2-1cc2f4862085" (UID: "c9565c20-c5f0-49a7-acd2-1cc2f4862085"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.623447 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm" (OuterVolumeSpecName: "kube-api-access-xkngm") pod "c9565c20-c5f0-49a7-acd2-1cc2f4862085" (UID: "c9565c20-c5f0-49a7-acd2-1cc2f4862085"). InnerVolumeSpecName "kube-api-access-xkngm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.624231 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9565c20-c5f0-49a7-acd2-1cc2f4862085" (UID: "c9565c20-c5f0-49a7-acd2-1cc2f4862085"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.718275 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.718467 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.218444915 +0000 UTC m=+154.576298579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.718921 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.719043 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9565c20-c5f0-49a7-acd2-1cc2f4862085-secret-volume\") on node \"crc\" 
DevicePath \"\"" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.719057 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkngm\" (UniqueName: \"kubernetes.io/projected/c9565c20-c5f0-49a7-acd2-1cc2f4862085-kube-api-access-xkngm\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.719070 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9565c20-c5f0-49a7-acd2-1cc2f4862085-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.719343 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.219335706 +0000 UTC m=+154.577189370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.819899 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.31987737 +0000 UTC m=+154.677731034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.819932 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.820203 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.820481 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.320473828 +0000 UTC m=+154.678327492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.844384 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kqphj" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.898115 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.898327 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9565c20-c5f0-49a7-acd2-1cc2f4862085" containerName="collect-profiles" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.898338 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9565c20-c5f0-49a7-acd2-1cc2f4862085" containerName="collect-profiles" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.898431 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9565c20-c5f0-49a7-acd2-1cc2f4862085" containerName="collect-profiles" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.899112 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.900659 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.915118 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.920840 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.920997 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7l6\" (UniqueName: \"kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.921023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:22 crc kubenswrapper[4915]: I0127 18:44:22.921045 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities\") pod \"community-operators-q4xjv\" (UID: 
\"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:22 crc kubenswrapper[4915]: E0127 18:44:22.921143 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.421130073 +0000 UTC m=+154.778983737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.021766 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.021866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7l6\" (UniqueName: \"kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.021894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.021927 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.022066 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.522053272 +0000 UTC m=+154.879906936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.022396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.022527 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.047825 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7l6\" (UniqueName: \"kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6\") pod \"community-operators-q4xjv\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.081250 4915 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.103514 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.105063 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.107071 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.115018 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.122245 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.122368 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5k2\" (UniqueName: \"kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.122404 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.622381283 +0000 UTC m=+154.980234947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.122432 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.122469 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.122577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.122884 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:44:23.62287447 +0000 UTC m=+154.980728224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.219465 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.223386 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.723370153 +0000 UTC m=+155.081223817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223419 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223458 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5k2\" (UniqueName: \"kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223478 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223505 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities\") pod \"certified-operators-lpn7t\" (UID: 
\"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.223869 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.224026 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:23.724016791 +0000 UTC m=+155.081870455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.224374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content\") pod \"certified-operators-lpn7t\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.243528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5k2\" (UniqueName: \"kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2\") pod \"certified-operators-lpn7t\" (UID: 
\"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.270505 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:23 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:23 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:23 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.270823 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.311312 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6zlh"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.312393 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.324403 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.324479 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.324500 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5cx\" (UniqueName: \"kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.324577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.324706 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-27 18:44:23.824693557 +0000 UTC m=+155.182547221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.326843 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6zlh"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.331035 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" event={"ID":"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac","Type":"ContainerStarted","Data":"c479cf43a8532ff49081fa31327cd26ad52fe7db6841593f1ec302d1badb106f"} Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.331089 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" event={"ID":"d6f6bcf0-b97a-4e23-9c09-d9f817d725ac","Type":"ContainerStarted","Data":"9987308ab35eb3135e65f8b8d65dee8f804b166a75624b9f2dcdbca3d080941b"} Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.341152 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.347402 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw" event={"ID":"c9565c20-c5f0-49a7-acd2-1cc2f4862085","Type":"ContainerDied","Data":"ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb"} Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.347449 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac392b07f80fd445d5200d385b6a38fc2223b87dd8ac3f5b688bea05f6ceb6eb" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.352773 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.362603 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4lnms" podStartSLOduration=10.362583944 podStartE2EDuration="10.362583944s" podCreationTimestamp="2026-01-27 18:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:23.362312131 +0000 UTC m=+154.720165815" watchObservedRunningTime="2026-01-27 18:44:23.362583944 +0000 UTC m=+154.720437608" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.422658 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.425634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.425718 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.425749 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5cx\" (UniqueName: \"kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.425827 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.426297 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 18:44:23.926272244 +0000 UTC m=+155.284125918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.426426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.426443 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.460120 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5cx\" (UniqueName: \"kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx\") pod \"community-operators-q6zlh\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") " pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.503756 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.504707 4915 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.514448 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.527762 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.528568 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 18:44:24.02855265 +0000 UTC m=+155.386406314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.629921 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.630367 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrjb\" (UniqueName: \"kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.630376 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6zlh" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.630421 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.630449 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: E0127 18:44:23.630840 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 18:44:24.130823096 +0000 UTC m=+155.488676810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lqcx4" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.659302 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.662170 4915 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T18:44:23.081280924Z","Handler":null,"Name":""} Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.666000 4915 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.666042 4915 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 18:44:23 crc kubenswrapper[4915]: W0127 18:44:23.685244 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ec10b90_7f91_4bda_bcaa_cce472027970.slice/crio-89c1afc1e43c45aa1842563008228b9a969fbe8e3822377cc62868c40007f3a1 WatchSource:0}: Error finding container 89c1afc1e43c45aa1842563008228b9a969fbe8e3822377cc62868c40007f3a1: Status 404 returned error can't find the container with id 89c1afc1e43c45aa1842563008228b9a969fbe8e3822377cc62868c40007f3a1 Jan 27 18:44:23 crc kubenswrapper[4915]: 
I0127 18:44:23.711356 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.733295 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.733460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.733514 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrjb\" (UniqueName: \"kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.733558 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: W0127 18:44:23.733947 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf973ebe4_3ce2_49c7_96b8_c7e238c606a7.slice/crio-cdc3a017bd1385f130018331244d077d39cdf576b018884f8cbf534149ee9e6f WatchSource:0}: Error finding container cdc3a017bd1385f130018331244d077d39cdf576b018884f8cbf534149ee9e6f: Status 404 returned error can't find the container with id cdc3a017bd1385f130018331244d077d39cdf576b018884f8cbf534149ee9e6f Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.734094 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.734528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.750818 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.759602 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrjb\" (UniqueName: \"kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb\") pod \"certified-operators-qz9ll\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") " pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.834541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.855534 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.855583 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.857068 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qz9ll" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.936068 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lqcx4\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") " pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.966882 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.967615 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.967817 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6zlh"] Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.972118 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.972394 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 18:44:23 crc kubenswrapper[4915]: I0127 18:44:23.974785 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:44:24 crc kubenswrapper[4915]: W0127 18:44:24.060497 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea88d8e_91fe_42e8_869b_c95deff169b2.slice/crio-57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880 WatchSource:0}: Error finding container 
57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880: Status 404 returned error can't find the container with id 57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880 Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.086353 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"] Jan 27 18:44:24 crc kubenswrapper[4915]: W0127 18:44:24.092947 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f662342_5dc6_43c7_b6a1_242751990f64.slice/crio-4b1c2f3b1e7d40f85bd1d7fbd415f8031b49e49e6e63719a88c11c0c87773f88 WatchSource:0}: Error finding container 4b1c2f3b1e7d40f85bd1d7fbd415f8031b49e49e6e63719a88c11c0c87773f88: Status 404 returned error can't find the container with id 4b1c2f3b1e7d40f85bd1d7fbd415f8031b49e49e6e63719a88c11c0c87773f88 Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.139177 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.139225 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.188068 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.240947 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.241691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.242110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.263771 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.269704 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:24 crc 
kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:24 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:24 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.270121 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.346607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerStarted","Data":"57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.348378 4915 generic.go:334] "Generic (PLEG): container finished" podID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerID="ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa" exitCode=0 Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.348427 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerDied","Data":"ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.348444 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerStarted","Data":"cdc3a017bd1385f130018331244d077d39cdf576b018884f8cbf534149ee9e6f"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.353183 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.356238 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerStarted","Data":"4b1c2f3b1e7d40f85bd1d7fbd415f8031b49e49e6e63719a88c11c0c87773f88"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.362765 4915 generic.go:334] "Generic (PLEG): container finished" podID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerID="721b29e4e43c4944bc270eef0fa8670b03d5e4f6db081c8bbb14bc0cbe5b99df" exitCode=0 Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.362905 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerDied","Data":"721b29e4e43c4944bc270eef0fa8670b03d5e4f6db081c8bbb14bc0cbe5b99df"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.362949 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerStarted","Data":"89c1afc1e43c45aa1842563008228b9a969fbe8e3822377cc62868c40007f3a1"} Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.373065 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.376386 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7npbf" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.649020 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.656571 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5s7q5" Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.680050 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"] Jan 27 18:44:24 crc kubenswrapper[4915]: I0127 18:44:24.820007 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.101918 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.103516 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.105768 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.112430 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.257906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.258155 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dh24\" (UniqueName: \"kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.258268 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.272526 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 
27 18:44:25 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:25 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:25 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.272645 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.359767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.359929 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dh24\" (UniqueName: \"kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.359976 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.360749 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content\") pod \"redhat-marketplace-s9b8l\" (UID: 
\"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.360863 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.364253 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.368308 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f662342-5dc6-43c7-b6a1-242751990f64" containerID="4f5a23e527f6dcccb35c87fd050801ff8f67b5900643b42fe3c911767e16991f" exitCode=0 Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.368410 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerDied","Data":"4f5a23e527f6dcccb35c87fd050801ff8f67b5900643b42fe3c911767e16991f"} Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.371028 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7f0109a-db9d-4999-83c2-426a28cecc90","Type":"ContainerStarted","Data":"d6102004dd777818971d818571bbf3772417347710ed0f82c0d0edd55122ce68"} Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.371054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7f0109a-db9d-4999-83c2-426a28cecc90","Type":"ContainerStarted","Data":"6ec65c6191ef985256ba53343f0e85dc9cc2eb2fdba097e045ff6f8b92ef2aa7"} Jan 27 18:44:25 crc 
kubenswrapper[4915]: I0127 18:44:25.373544 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" event={"ID":"ed1f13f7-cdf2-4eb8-addd-7087dd72db11","Type":"ContainerStarted","Data":"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"} Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.373582 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" event={"ID":"ed1f13f7-cdf2-4eb8-addd-7087dd72db11","Type":"ContainerStarted","Data":"6d46cc4d1bf63ca50c0c74e3bd8c9064dcb52db104aa7ba903bbcee8a2fc5f9f"} Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.373671 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.375985 4915 generic.go:334] "Generic (PLEG): container finished" podID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerID="f2c419d9283db99758b56abe9530360c5e770ec23714db637e37b914570259a3" exitCode=0 Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.376072 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerDied","Data":"f2c419d9283db99758b56abe9530360c5e770ec23714db637e37b914570259a3"} Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.396024 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dh24\" (UniqueName: \"kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24\") pod \"redhat-marketplace-s9b8l\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.422819 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.440371 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" podStartSLOduration=135.440346414 podStartE2EDuration="2m15.440346414s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:25.438376288 +0000 UTC m=+156.796229962" watchObservedRunningTime="2026-01-27 18:44:25.440346414 +0000 UTC m=+156.798200078" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.459985 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.459958746 podStartE2EDuration="2.459958746s" podCreationTimestamp="2026-01-27 18:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:25.457894829 +0000 UTC m=+156.815748513" watchObservedRunningTime="2026-01-27 18:44:25.459958746 +0000 UTC m=+156.817812460" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.505576 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.506600 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.519563 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.663173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckqz\" (UniqueName: \"kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.663593 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.663653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.714351 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.751210 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.751254 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.753246 4915 patch_prober.go:28] interesting pod/console-f9d7485db-sd8x9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.753383 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sd8x9" podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.765812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qckqz\" (UniqueName: \"kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.765900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.765941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.767624 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.767632 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.785890 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qckqz\" (UniqueName: \"kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz\") pod \"redhat-marketplace-t7ngn\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:25 crc kubenswrapper[4915]: I0127 18:44:25.822679 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.064597 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.064991 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.064665 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.066930 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.072454 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:44:26 crc kubenswrapper[4915]: W0127 18:44:26.082095 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f772375_40f6_46c4_84ea_7eab8f29e6de.slice/crio-40fbe498734f3e1536c51c076bb66c5be4dee9179173dfc59e416362dd3094f2 WatchSource:0}: Error finding container 
40fbe498734f3e1536c51c076bb66c5be4dee9179173dfc59e416362dd3094f2: Status 404 returned error can't find the container with id 40fbe498734f3e1536c51c076bb66c5be4dee9179173dfc59e416362dd3094f2 Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.097591 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"] Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.098474 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.101676 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.107288 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"] Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.177471 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ngl6\" (UniqueName: \"kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.177553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.177727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.268250 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.270117 4915 patch_prober.go:28] interesting pod/router-default-5444994796-49lf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 18:44:26 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Jan 27 18:44:26 crc kubenswrapper[4915]: [+]process-running ok Jan 27 18:44:26 crc kubenswrapper[4915]: healthz check failed Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.270178 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-49lf7" podUID="cdc47495-7d8a-4981-8647-f93a058fe075" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.279821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.279904 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 
27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.279939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ngl6\" (UniqueName: \"kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.280783 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.281638 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.296625 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ngl6\" (UniqueName: \"kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6\") pod \"redhat-operators-fvnw2\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.398329 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerID="cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9" exitCode=0 Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.398398 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" 
event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerDied","Data":"cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9"} Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.398456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerStarted","Data":"40fbe498734f3e1536c51c076bb66c5be4dee9179173dfc59e416362dd3094f2"} Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.400945 4915 generic.go:334] "Generic (PLEG): container finished" podID="a7f0109a-db9d-4999-83c2-426a28cecc90" containerID="d6102004dd777818971d818571bbf3772417347710ed0f82c0d0edd55122ce68" exitCode=0 Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.401004 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7f0109a-db9d-4999-83c2-426a28cecc90","Type":"ContainerDied","Data":"d6102004dd777818971d818571bbf3772417347710ed0f82c0d0edd55122ce68"} Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.404346 4915 generic.go:334] "Generic (PLEG): container finished" podID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerID="932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3" exitCode=0 Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.404430 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerDied","Data":"932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3"} Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.404481 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerStarted","Data":"f0794cb6b7e77d3c42b4b7eb6d620fe043049155b35a0c4e0a2c25832150f759"} Jan 27 18:44:26 crc kubenswrapper[4915]: 
I0127 18:44:26.428091 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.504496 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"] Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.505621 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.511014 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"] Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.587840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtj87\" (UniqueName: \"kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.588224 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.588257 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.680101 4915 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"] Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.689200 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.689516 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtj87\" (UniqueName: \"kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.689580 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.690044 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.690113 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " 
pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: W0127 18:44:26.692837 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b62d1e_bcc6_46c1_ac10_7e2819203102.slice/crio-be8028adaa39c1cc90727bdce28f2023a89596cbc6a05a63cfadac50c90e09ec WatchSource:0}: Error finding container be8028adaa39c1cc90727bdce28f2023a89596cbc6a05a63cfadac50c90e09ec: Status 404 returned error can't find the container with id be8028adaa39c1cc90727bdce28f2023a89596cbc6a05a63cfadac50c90e09ec Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.710463 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtj87\" (UniqueName: \"kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87\") pod \"redhat-operators-9v6vn\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:26 crc kubenswrapper[4915]: I0127 18:44:26.824806 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.180001 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"] Jan 27 18:44:27 crc kubenswrapper[4915]: W0127 18:44:27.210463 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67518fbf_bc2b_457c_a2ae_a5b31d8cf8e4.slice/crio-5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c WatchSource:0}: Error finding container 5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c: Status 404 returned error can't find the container with id 5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.274520 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.277448 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-49lf7" Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.446462 4915 generic.go:334] "Generic (PLEG): container finished" podID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerID="0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8" exitCode=0 Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.446521 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerDied","Data":"0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8"} Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.446548 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" 
event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerStarted","Data":"be8028adaa39c1cc90727bdce28f2023a89596cbc6a05a63cfadac50c90e09ec"} Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.452004 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerStarted","Data":"5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c"} Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.805182 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.910816 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access\") pod \"a7f0109a-db9d-4999-83c2-426a28cecc90\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.910932 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir\") pod \"a7f0109a-db9d-4999-83c2-426a28cecc90\" (UID: \"a7f0109a-db9d-4999-83c2-426a28cecc90\") " Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.911204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a7f0109a-db9d-4999-83c2-426a28cecc90" (UID: "a7f0109a-db9d-4999-83c2-426a28cecc90"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:44:27 crc kubenswrapper[4915]: I0127 18:44:27.918879 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7f0109a-db9d-4999-83c2-426a28cecc90" (UID: "a7f0109a-db9d-4999-83c2-426a28cecc90"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.012808 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0109a-db9d-4999-83c2-426a28cecc90-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.012845 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f0109a-db9d-4999-83c2-426a28cecc90-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.466825 4915 generic.go:334] "Generic (PLEG): container finished" podID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerID="e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336" exitCode=0 Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.467095 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerDied","Data":"e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336"} Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.474941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a7f0109a-db9d-4999-83c2-426a28cecc90","Type":"ContainerDied","Data":"6ec65c6191ef985256ba53343f0e85dc9cc2eb2fdba097e045ff6f8b92ef2aa7"} Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.474969 
4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec65c6191ef985256ba53343f0e85dc9cc2eb2fdba097e045ff6f8b92ef2aa7" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.475039 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.987090 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:44:28 crc kubenswrapper[4915]: E0127 18:44:28.987325 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f0109a-db9d-4999-83c2-426a28cecc90" containerName="pruner" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.987335 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f0109a-db9d-4999-83c2-426a28cecc90" containerName="pruner" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.987422 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f0109a-db9d-4999-83c2-426a28cecc90" containerName="pruner" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.987839 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.989627 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 18:44:28 crc kubenswrapper[4915]: I0127 18:44:28.989755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.007857 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.135544 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.135667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.236778 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.236885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.236949 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.269043 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:29 crc kubenswrapper[4915]: I0127 18:44:29.349158 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 18:44:31 crc kubenswrapper[4915]: I0127 18:44:31.606955 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gpbpr" Jan 27 18:44:32 crc kubenswrapper[4915]: I0127 18:44:32.509088 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:32 crc kubenswrapper[4915]: I0127 18:44:32.515080 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65be8e09-e032-40de-b290-c66c07282211-metrics-certs\") pod \"network-metrics-daemon-d467q\" (UID: \"65be8e09-e032-40de-b290-c66c07282211\") " pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:32 crc kubenswrapper[4915]: I0127 18:44:32.677372 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-d467q" Jan 27 18:44:35 crc kubenswrapper[4915]: I0127 18:44:35.747423 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:35 crc kubenswrapper[4915]: I0127 18:44:35.750641 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:44:36 crc kubenswrapper[4915]: I0127 18:44:36.063291 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:36 crc kubenswrapper[4915]: I0127 18:44:36.063390 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:36 crc kubenswrapper[4915]: I0127 18:44:36.063635 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-l57nd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 27 18:44:36 crc kubenswrapper[4915]: I0127 18:44:36.063731 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l57nd" podUID="42d32061-12d1-4e45-8217-73fc194a1b3f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 27 18:44:44 crc kubenswrapper[4915]: I0127 18:44:44.198121 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" Jan 27 18:44:46 crc kubenswrapper[4915]: I0127 18:44:46.087072 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l57nd" Jan 27 18:44:50 crc kubenswrapper[4915]: I0127 18:44:50.625169 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:44:50 crc kubenswrapper[4915]: I0127 18:44:50.625829 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:44:56 crc kubenswrapper[4915]: I0127 18:44:56.493176 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gwxs9" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.658912 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.659098 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cq5k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lpn7t_openshift-marketplace(f973ebe4-3ce2-49c7-96b8-c7e238c606a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.660636 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lpn7t" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" Jan 27 18:44:56 crc 
kubenswrapper[4915]: E0127 18:44:56.733139 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.733288 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9q5cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-q6zlh_openshift-marketplace(4ea88d8e-91fe-42e8-869b-c95deff169b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.734536 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q6zlh" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.760360 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lpn7t" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.793176 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q6zlh" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.845565 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.851415 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qz9ll_openshift-marketplace(3f662342-5dc6-43c7-b6a1-242751990f64): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 18:44:56 crc kubenswrapper[4915]: E0127 18:44:56.852711 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qz9ll" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.092309 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 18:44:57 crc kubenswrapper[4915]: W0127 18:44:57.100955 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod89defabd_4685_40f6_947b_1d67884adc37.slice/crio-affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548 WatchSource:0}: Error finding container affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548: Status 404 returned error can't find the container with id affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548 Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.168200 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-d467q"] Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.765292 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"89defabd-4685-40f6-947b-1d67884adc37","Type":"ContainerStarted","Data":"affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548"} Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.767417 4915 generic.go:334] "Generic (PLEG): container finished" podID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerID="29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78" exitCode=0 Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.767482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerDied","Data":"29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78"} Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 
18:44:57.771510 4915 generic.go:334] "Generic (PLEG): container finished" podID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerID="cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551" exitCode=0
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.771607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerDied","Data":"cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551"}
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.775083 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerID="b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884" exitCode=0
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.775157 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerDied","Data":"b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884"}
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.778570 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d467q" event={"ID":"65be8e09-e032-40de-b290-c66c07282211","Type":"ContainerStarted","Data":"6981ff124e07d475b39c7db5aad029dce36d63097837f00d860cdaa64f42aa0e"}
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.826140 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerStarted","Data":"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"}
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.852122 4915 generic.go:334] "Generic (PLEG): container finished" podID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerID="b036f4f05f5af94c8661b1909b65b918e6ea0fca8b29fd05ee92a372fa134c48" exitCode=0
Jan 27 18:44:57 crc kubenswrapper[4915]: I0127 18:44:57.854635 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerDied","Data":"b036f4f05f5af94c8661b1909b65b918e6ea0fca8b29fd05ee92a372fa134c48"}
Jan 27 18:44:57 crc kubenswrapper[4915]: E0127 18:44:57.854985 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qz9ll" podUID="3f662342-5dc6-43c7-b6a1-242751990f64"
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.803051 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.864708 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d467q" event={"ID":"65be8e09-e032-40de-b290-c66c07282211","Type":"ContainerStarted","Data":"6f31f8a64aebce6a44cdf1e76cec86477b9aa466a71039b2a76b62d84ca3f62d"}
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.864753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-d467q" event={"ID":"65be8e09-e032-40de-b290-c66c07282211","Type":"ContainerStarted","Data":"053ac664d0120c5ff6646364a5bd52e64924028693e1445a1cdf228d6dcdc41e"}
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.867102 4915 generic.go:334] "Generic (PLEG): container finished" podID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerID="dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558" exitCode=0
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.867150 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerDied","Data":"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"}
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.880910 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-d467q" podStartSLOduration=168.880894383 podStartE2EDuration="2m48.880894383s" podCreationTimestamp="2026-01-27 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:44:58.878560345 +0000 UTC m=+190.236414009" watchObservedRunningTime="2026-01-27 18:44:58.880894383 +0000 UTC m=+190.238748047"
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.886828 4915 generic.go:334] "Generic (PLEG): container finished" podID="89defabd-4685-40f6-947b-1d67884adc37" containerID="09385dc5533095e5d7d00e27235f52189eb684675871a0a452730e5a4875dbf8" exitCode=0
Jan 27 18:44:58 crc kubenswrapper[4915]: I0127 18:44:58.886871 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"89defabd-4685-40f6-947b-1d67884adc37","Type":"ContainerDied","Data":"09385dc5533095e5d7d00e27235f52189eb684675871a0a452730e5a4875dbf8"}
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.135491 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"]
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.137827 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.140153 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.140218 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.146298 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"]
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.176475 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.176537 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.176576 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9b9\" (UniqueName: \"kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.274253 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.277725 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.277766 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.277819 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r9b9\" (UniqueName: \"kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.278545 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.284007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.306474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r9b9\" (UniqueName: \"kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9\") pod \"collect-profiles-29492325-7jg5z\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.378343 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access\") pod \"89defabd-4685-40f6-947b-1d67884adc37\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") "
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.378414 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir\") pod \"89defabd-4685-40f6-947b-1d67884adc37\" (UID: \"89defabd-4685-40f6-947b-1d67884adc37\") "
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.378493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89defabd-4685-40f6-947b-1d67884adc37" (UID: "89defabd-4685-40f6-947b-1d67884adc37"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.378690 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89defabd-4685-40f6-947b-1d67884adc37-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.382975 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89defabd-4685-40f6-947b-1d67884adc37" (UID: "89defabd-4685-40f6-947b-1d67884adc37"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.456626 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.480828 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89defabd-4685-40f6-947b-1d67884adc37-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.898649 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"89defabd-4685-40f6-947b-1d67884adc37","Type":"ContainerDied","Data":"affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548"}
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.899040 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affa9c76d8fa19ba18955ddbdbef9ff2b33f8ba655185ee294cfa3711385a548"
Jan 27 18:45:00 crc kubenswrapper[4915]: I0127 18:45:00.898818 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 18:45:00 crc kubenswrapper[4915]: E0127 18:45:00.959729 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod89defabd_4685_40f6_947b_1d67884adc37.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.249614 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"]
Jan 27 18:45:03 crc kubenswrapper[4915]: W0127 18:45:03.258603 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26ca3030_1aef_4e8a_a7cb_ac9b8b8bbafd.slice/crio-e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7 WatchSource:0}: Error finding container e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7: Status 404 returned error can't find the container with id e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.919812 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z" event={"ID":"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd","Type":"ContainerStarted","Data":"e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7"}
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.922908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerStarted","Data":"c4a31417c295ab9013ffc624ecd66d0e0f8c842ba3c8a15908e270e5a1e5bac5"}
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.925288 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerStarted","Data":"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9"}
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.950501 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q4xjv" podStartSLOduration=3.52551311 podStartE2EDuration="41.950475147s" podCreationTimestamp="2026-01-27 18:44:22 +0000 UTC" firstStartedPulling="2026-01-27 18:44:24.364293866 +0000 UTC m=+155.722147530" lastFinishedPulling="2026-01-27 18:45:02.789255893 +0000 UTC m=+194.147109567" observedRunningTime="2026-01-27 18:45:03.949449482 +0000 UTC m=+195.307303156" watchObservedRunningTime="2026-01-27 18:45:03.950475147 +0000 UTC m=+195.308328811"
Jan 27 18:45:03 crc kubenswrapper[4915]: I0127 18:45:03.972169 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s9b8l" podStartSLOduration=3.8243615 podStartE2EDuration="38.972142534s" podCreationTimestamp="2026-01-27 18:44:25 +0000 UTC" firstStartedPulling="2026-01-27 18:44:26.406134141 +0000 UTC m=+157.763987805" lastFinishedPulling="2026-01-27 18:45:01.553915175 +0000 UTC m=+192.911768839" observedRunningTime="2026-01-27 18:45:03.96915464 +0000 UTC m=+195.327008314" watchObservedRunningTime="2026-01-27 18:45:03.972142534 +0000 UTC m=+195.329996208"
Jan 27 18:45:04 crc kubenswrapper[4915]: I0127 18:45:04.005852 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"]
Jan 27 18:45:04 crc kubenswrapper[4915]: I0127 18:45:04.933688 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z" event={"ID":"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd","Type":"ContainerStarted","Data":"4493a21aaba906d69aef852a9340c8b02c6fdcb9bd20a8b861ea1ca4b21dea5a"}
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.187924 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 18:45:05 crc kubenswrapper[4915]: E0127 18:45:05.188449 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89defabd-4685-40f6-947b-1d67884adc37" containerName="pruner"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.189427 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="89defabd-4685-40f6-947b-1d67884adc37" containerName="pruner"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.189676 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="89defabd-4685-40f6-947b-1d67884adc37" containerName="pruner"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.190266 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.192577 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.194892 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.194938 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.344473 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.344947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.423310 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s9b8l"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.423522 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s9b8l"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.445612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.445704 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.445743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.465495 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.511441 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.939222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerStarted","Data":"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb"}
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.942445 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerStarted","Data":"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"}
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.944542 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerStarted","Data":"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4"}
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.946516 4915 generic.go:334] "Generic (PLEG): container finished" podID="26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" containerID="4493a21aaba906d69aef852a9340c8b02c6fdcb9bd20a8b861ea1ca4b21dea5a" exitCode=0
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.946586 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z" event={"ID":"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd","Type":"ContainerDied","Data":"4493a21aaba906d69aef852a9340c8b02c6fdcb9bd20a8b861ea1ca4b21dea5a"}
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.954698 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 18:45:05 crc kubenswrapper[4915]: W0127 18:45:05.960153 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5d0cd4c3_6c87_405b_ace2_6eefd124ecbe.slice/crio-dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4 WatchSource:0}: Error finding container dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4: Status 404 returned error can't find the container with id dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4
Jan 27 18:45:05 crc kubenswrapper[4915]: I0127 18:45:05.978807 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9v6vn" podStartSLOduration=3.238627301 podStartE2EDuration="39.978774196s" podCreationTimestamp="2026-01-27 18:44:26 +0000 UTC" firstStartedPulling="2026-01-27 18:44:28.46899086 +0000 UTC m=+159.826844524" lastFinishedPulling="2026-01-27 18:45:05.209137745 +0000 UTC m=+196.566991419" observedRunningTime="2026-01-27 18:45:05.964416578 +0000 UTC m=+197.322270242" watchObservedRunningTime="2026-01-27 18:45:05.978774196 +0000 UTC m=+197.336627860"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.019158 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvnw2" podStartSLOduration=2.289231134 podStartE2EDuration="40.019136225s" podCreationTimestamp="2026-01-27 18:44:26 +0000 UTC" firstStartedPulling="2026-01-27 18:44:27.447923708 +0000 UTC m=+158.805777372" lastFinishedPulling="2026-01-27 18:45:05.177828799 +0000 UTC m=+196.535682463" observedRunningTime="2026-01-27 18:45:06.017055509 +0000 UTC m=+197.374909173" watchObservedRunningTime="2026-01-27 18:45:06.019136225 +0000 UTC m=+197.376989909"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.035037 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t7ngn" podStartSLOduration=3.120333982 podStartE2EDuration="41.035023904s" podCreationTimestamp="2026-01-27 18:44:25 +0000 UTC" firstStartedPulling="2026-01-27 18:44:26.399597777 +0000 UTC m=+157.757451441" lastFinishedPulling="2026-01-27 18:45:04.314287699 +0000 UTC m=+195.672141363" observedRunningTime="2026-01-27 18:45:06.033442531 +0000 UTC m=+197.391296195" watchObservedRunningTime="2026-01-27 18:45:06.035023904 +0000 UTC m=+197.392877568"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.429118 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvnw2"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.429186 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvnw2"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.639454 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-s9b8l" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:45:06 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:45:06 crc kubenswrapper[4915]: >
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.825727 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9v6vn"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.826326 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9v6vn"
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.952656 4915 generic.go:334] "Generic (PLEG): container finished" podID="5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" containerID="ebb05daa8cc5948d0d0517b610582c852b72ac3dcb23ece97a31bc36aaa9d12a" exitCode=0
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.952766 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe","Type":"ContainerDied","Data":"ebb05daa8cc5948d0d0517b610582c852b72ac3dcb23ece97a31bc36aaa9d12a"}
Jan 27 18:45:06 crc kubenswrapper[4915]: I0127 18:45:06.953256 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe","Type":"ContainerStarted","Data":"dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4"}
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.272777 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.368117 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume\") pod \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") "
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.368173 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r9b9\" (UniqueName: \"kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9\") pod \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") "
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.368243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume\") pod \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\" (UID: \"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd\") "
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.368891 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume" (OuterVolumeSpecName: "config-volume") pod "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" (UID: "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.369539 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.372839 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" (UID: "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.373084 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9" (OuterVolumeSpecName: "kube-api-access-4r9b9") pod "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" (UID: "26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd"). InnerVolumeSpecName "kube-api-access-4r9b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.470305 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r9b9\" (UniqueName: \"kubernetes.io/projected/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-kube-api-access-4r9b9\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.470597 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.475232 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvnw2" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:45:07 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:45:07 crc kubenswrapper[4915]: >
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.881608 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9v6vn" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:45:07 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:45:07 crc kubenswrapper[4915]: >
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.961544 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z" event={"ID":"26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd","Type":"ContainerDied","Data":"e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7"}
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.961597 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d6d7c65420c5f5a5a8534ede3bb35d6e4ae9e159af00eb5ee49abd1c8512e7"
Jan 27 18:45:07 crc kubenswrapper[4915]: I0127 18:45:07.961636 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.372287 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.484904 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir\") pod \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") "
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.485072 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access\") pod \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\" (UID: \"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe\") "
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.485286 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" (UID: "5d0cd4c3-6c87-405b-ace2-6eefd124ecbe"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.485441 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.489248 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" (UID: "5d0cd4c3-6c87-405b-ace2-6eefd124ecbe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.586265 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0cd4c3-6c87-405b-ace2-6eefd124ecbe-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.967020 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0cd4c3-6c87-405b-ace2-6eefd124ecbe","Type":"ContainerDied","Data":"dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4"}
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.967072 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd50ae468ddedb4f246f582659c8d2f05ec32f4e3347999a09d56592efcd8ef4"
Jan 27 18:45:08 crc kubenswrapper[4915]: I0127 18:45:08.967132 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.800078 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 18:45:12 crc kubenswrapper[4915]: E0127 18:45:12.800820 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" containerName="collect-profiles"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.800840 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" containerName="collect-profiles"
Jan 27 18:45:12 crc kubenswrapper[4915]: E0127 18:45:12.800864 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" containerName="pruner"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.800874 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" containerName="pruner"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.801045 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0cd4c3-6c87-405b-ace2-6eefd124ecbe" containerName="pruner"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.801071 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" containerName="collect-profiles"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.801649 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.806593 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.807516 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.812183 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.943438 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.943487 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:12 crc kubenswrapper[4915]: I0127 18:45:12.943521 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.044669 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.044811 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.044833 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.044897 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.044918 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.077410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access\") pod \"installer-9-crc\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") "
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.135172 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.220092 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.220130 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:45:13 crc kubenswrapper[4915]: I0127 18:45:13.286656 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:45:14 crc kubenswrapper[4915]: I0127 18:45:14.067365 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:45:15 crc kubenswrapper[4915]: I0127 18:45:15.490953 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:45:15 crc kubenswrapper[4915]: I0127 18:45:15.542456 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:45:15 crc kubenswrapper[4915]: I0127 18:45:15.823116 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:15 crc kubenswrapper[4915]: I0127 18:45:15.824025 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:15 crc kubenswrapper[4915]: I0127 18:45:15.886715 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 
18:45:16.007909 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerStarted","Data":"e2fca945a92f6c10554080f8235fb6ca7d47db6aa70e1c71ffdfb1abb803c1c8"} Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.009726 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerStarted","Data":"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"} Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.013490 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerStarted","Data":"56caf6f8389d73b9681278297ed287bca0a4bc646e7aed5277191728963b4b87"} Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.093560 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.134950 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 18:45:16 crc kubenswrapper[4915]: W0127 18:45:16.168165 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd1c9d54c_fd3a_4729_beab_091e7acbcb83.slice/crio-7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c WatchSource:0}: Error finding container 7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c: Status 404 returned error can't find the container with id 7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.487476 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:45:16 crc 
kubenswrapper[4915]: I0127 18:45:16.527751 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.895570 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:45:16 crc kubenswrapper[4915]: I0127 18:45:16.957863 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.022859 4915 generic.go:334] "Generic (PLEG): container finished" podID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerID="de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095" exitCode=0 Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.022925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerDied","Data":"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"} Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.028353 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerDied","Data":"56caf6f8389d73b9681278297ed287bca0a4bc646e7aed5277191728963b4b87"} Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.028289 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f662342-5dc6-43c7-b6a1-242751990f64" containerID="56caf6f8389d73b9681278297ed287bca0a4bc646e7aed5277191728963b4b87" exitCode=0 Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.032895 4915 generic.go:334] "Generic (PLEG): container finished" podID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerID="e2fca945a92f6c10554080f8235fb6ca7d47db6aa70e1c71ffdfb1abb803c1c8" exitCode=0 Jan 27 18:45:17 crc kubenswrapper[4915]: 
I0127 18:45:17.033004 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerDied","Data":"e2fca945a92f6c10554080f8235fb6ca7d47db6aa70e1c71ffdfb1abb803c1c8"} Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.036949 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1c9d54c-fd3a-4729-beab-091e7acbcb83","Type":"ContainerStarted","Data":"059f72ac6ddd72ef62f1a858f2e3ba3960c45a666ad0c25da461da1ca55cee12"} Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.037015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1c9d54c-fd3a-4729-beab-091e7acbcb83","Type":"ContainerStarted","Data":"7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c"} Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.122656 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.122624787 podStartE2EDuration="5.122624787s" podCreationTimestamp="2026-01-27 18:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:17.106504832 +0000 UTC m=+208.464358536" watchObservedRunningTime="2026-01-27 18:45:17.122624787 +0000 UTC m=+208.480478451" Jan 27 18:45:17 crc kubenswrapper[4915]: I0127 18:45:17.328559 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:45:18 crc kubenswrapper[4915]: I0127 18:45:18.050889 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerStarted","Data":"557d0dfe91a3afc06edbec15830b941b903661ef330e37811bb7f380ec7e2f5b"} Jan 27 18:45:18 crc 
kubenswrapper[4915]: I0127 18:45:18.081057 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qz9ll" podStartSLOduration=2.986799356 podStartE2EDuration="55.081037952s" podCreationTimestamp="2026-01-27 18:44:23 +0000 UTC" firstStartedPulling="2026-01-27 18:44:25.369884127 +0000 UTC m=+156.727737791" lastFinishedPulling="2026-01-27 18:45:17.464122703 +0000 UTC m=+208.821976387" observedRunningTime="2026-01-27 18:45:18.078627647 +0000 UTC m=+209.436481321" watchObservedRunningTime="2026-01-27 18:45:18.081037952 +0000 UTC m=+209.438891616" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.059170 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerStarted","Data":"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"} Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.064036 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerStarted","Data":"841ecd031ef5cf3b15113f1d0f573fcd17c668b5b411b2390dfea442d439d69d"} Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.064226 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t7ngn" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="registry-server" containerID="cri-o://85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4" gracePeriod=2 Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.094894 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lpn7t" podStartSLOduration=2.533124811 podStartE2EDuration="56.094874173s" podCreationTimestamp="2026-01-27 18:44:23 +0000 UTC" firstStartedPulling="2026-01-27 18:44:24.352901089 +0000 
UTC m=+155.710754753" lastFinishedPulling="2026-01-27 18:45:17.914650411 +0000 UTC m=+209.272504115" observedRunningTime="2026-01-27 18:45:19.093742482 +0000 UTC m=+210.451596136" watchObservedRunningTime="2026-01-27 18:45:19.094874173 +0000 UTC m=+210.452727857" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.493850 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.517049 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6zlh" podStartSLOduration=4.001093206 podStartE2EDuration="56.517032216s" podCreationTimestamp="2026-01-27 18:44:23 +0000 UTC" firstStartedPulling="2026-01-27 18:44:25.378612049 +0000 UTC m=+156.736465713" lastFinishedPulling="2026-01-27 18:45:17.894551049 +0000 UTC m=+209.252404723" observedRunningTime="2026-01-27 18:45:19.117354579 +0000 UTC m=+210.475208253" watchObservedRunningTime="2026-01-27 18:45:19.517032216 +0000 UTC m=+210.874885880" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.651850 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities\") pod \"5f772375-40f6-46c4-84ea-7eab8f29e6de\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.651903 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content\") pod \"5f772375-40f6-46c4-84ea-7eab8f29e6de\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.651951 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qckqz\" (UniqueName: 
\"kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz\") pod \"5f772375-40f6-46c4-84ea-7eab8f29e6de\" (UID: \"5f772375-40f6-46c4-84ea-7eab8f29e6de\") " Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.652859 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities" (OuterVolumeSpecName: "utilities") pod "5f772375-40f6-46c4-84ea-7eab8f29e6de" (UID: "5f772375-40f6-46c4-84ea-7eab8f29e6de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.670181 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz" (OuterVolumeSpecName: "kube-api-access-qckqz") pod "5f772375-40f6-46c4-84ea-7eab8f29e6de" (UID: "5f772375-40f6-46c4-84ea-7eab8f29e6de"). InnerVolumeSpecName "kube-api-access-qckqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.686041 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f772375-40f6-46c4-84ea-7eab8f29e6de" (UID: "5f772375-40f6-46c4-84ea-7eab8f29e6de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.753874 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.754245 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f772375-40f6-46c4-84ea-7eab8f29e6de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:19 crc kubenswrapper[4915]: I0127 18:45:19.754449 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qckqz\" (UniqueName: \"kubernetes.io/projected/5f772375-40f6-46c4-84ea-7eab8f29e6de-kube-api-access-qckqz\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.075442 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerID="85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4" exitCode=0 Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.075524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerDied","Data":"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4"} Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.075567 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t7ngn" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.075605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t7ngn" event={"ID":"5f772375-40f6-46c4-84ea-7eab8f29e6de","Type":"ContainerDied","Data":"40fbe498734f3e1536c51c076bb66c5be4dee9179173dfc59e416362dd3094f2"} Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.075642 4915 scope.go:117] "RemoveContainer" containerID="85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.102494 4915 scope.go:117] "RemoveContainer" containerID="b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.130530 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.139929 4915 scope.go:117] "RemoveContainer" containerID="cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.146070 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t7ngn"] Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.160241 4915 scope.go:117] "RemoveContainer" containerID="85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4" Jan 27 18:45:20 crc kubenswrapper[4915]: E0127 18:45:20.160908 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4\": container with ID starting with 85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4 not found: ID does not exist" containerID="85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.161012 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4"} err="failed to get container status \"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4\": rpc error: code = NotFound desc = could not find container \"85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4\": container with ID starting with 85a87e84af766ad76459f4ca4b4d3d980d45b8195788ec8f4ccdd5133d857af4 not found: ID does not exist" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.161108 4915 scope.go:117] "RemoveContainer" containerID="b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884" Jan 27 18:45:20 crc kubenswrapper[4915]: E0127 18:45:20.161421 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884\": container with ID starting with b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884 not found: ID does not exist" containerID="b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.161525 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884"} err="failed to get container status \"b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884\": rpc error: code = NotFound desc = could not find container \"b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884\": container with ID starting with b3fa59bab7d55799e5619c8d133035329ea57617acc0b86270f50bd38c59b884 not found: ID does not exist" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.161608 4915 scope.go:117] "RemoveContainer" containerID="cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9" Jan 27 18:45:20 crc kubenswrapper[4915]: E0127 
18:45:20.161914 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9\": container with ID starting with cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9 not found: ID does not exist" containerID="cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.161993 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9"} err="failed to get container status \"cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9\": rpc error: code = NotFound desc = could not find container \"cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9\": container with ID starting with cec41666db03f636810e8dc9f96644adeb83ed21ec3ad9d42ca7023c244049e9 not found: ID does not exist" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.331181 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"] Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.331646 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9v6vn" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="registry-server" containerID="cri-o://c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb" gracePeriod=2 Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.624992 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.625423 4915 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.625481 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.626325 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.626422 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3" gracePeriod=600 Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.837580 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v6vn" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.973699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities\") pod \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.973833 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtj87\" (UniqueName: \"kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87\") pod \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.973891 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content\") pod \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\" (UID: \"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4\") " Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.974655 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities" (OuterVolumeSpecName: "utilities") pod "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" (UID: "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:45:20 crc kubenswrapper[4915]: I0127 18:45:20.980031 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87" (OuterVolumeSpecName: "kube-api-access-qtj87") pod "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" (UID: "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4"). InnerVolumeSpecName "kube-api-access-qtj87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.075144 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtj87\" (UniqueName: \"kubernetes.io/projected/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-kube-api-access-qtj87\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.075510 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.084713 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3" exitCode=0 Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.085018 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3"} Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.085145 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f"} Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.085226 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" (UID: "67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.087737 4915 generic.go:334] "Generic (PLEG): container finished" podID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerID="c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb" exitCode=0 Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.087905 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerDied","Data":"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb"} Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.087954 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9v6vn" event={"ID":"67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4","Type":"ContainerDied","Data":"5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c"} Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.087976 4915 scope.go:117] "RemoveContainer" containerID="c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb" Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.087881 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9v6vn"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.108771 4915 scope.go:117] "RemoveContainer" containerID="dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.131843 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"]
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.138447 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9v6vn"]
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.140881 4915 scope.go:117] "RemoveContainer" containerID="e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.155243 4915 scope.go:117] "RemoveContainer" containerID="c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb"
Jan 27 18:45:21 crc kubenswrapper[4915]: E0127 18:45:21.155546 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb\": container with ID starting with c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb not found: ID does not exist" containerID="c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.155581 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb"} err="failed to get container status \"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb\": rpc error: code = NotFound desc = could not find container \"c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb\": container with ID starting with c0957b8e8631200a15dc095df007154d059c17ae3c81ff518c243ff4d3cbe4bb not found: ID does not exist"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.155608 4915 scope.go:117] "RemoveContainer" containerID="dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"
Jan 27 18:45:21 crc kubenswrapper[4915]: E0127 18:45:21.155976 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558\": container with ID starting with dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558 not found: ID does not exist" containerID="dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.156028 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558"} err="failed to get container status \"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558\": rpc error: code = NotFound desc = could not find container \"dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558\": container with ID starting with dd1231d70c9561956be91b84e2526a208369aa1207b6fc58f3c1d1da30626558 not found: ID does not exist"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.156063 4915 scope.go:117] "RemoveContainer" containerID="e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336"
Jan 27 18:45:21 crc kubenswrapper[4915]: E0127 18:45:21.156393 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336\": container with ID starting with e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336 not found: ID does not exist" containerID="e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.156423 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336"} err="failed to get container status \"e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336\": rpc error: code = NotFound desc = could not find container \"e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336\": container with ID starting with e82990a7c0d3f444e7fb4bdbd7ca5cae94b15e1a9d8208caaa4334a745ab4336 not found: ID does not exist"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.177650 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:21 crc kubenswrapper[4915]: E0127 18:45:21.224672 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67518fbf_bc2b_457c_a2ae_a5b31d8cf8e4.slice/crio-5b328b06764a37f04bef308acbbf058c207e5d68af431c05d573d627d6e5268c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67518fbf_bc2b_457c_a2ae_a5b31d8cf8e4.slice\": RecentStats: unable to find data in memory cache]"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.365285 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" path="/var/lib/kubelet/pods/5f772375-40f6-46c4-84ea-7eab8f29e6de/volumes"
Jan 27 18:45:21 crc kubenswrapper[4915]: I0127 18:45:21.366546 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" path="/var/lib/kubelet/pods/67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4/volumes"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.424078 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lpn7t"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.424522 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lpn7t"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.478920 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lpn7t"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.631295 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.631364 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.685694 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.858195 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.858261 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:23 crc kubenswrapper[4915]: I0127 18:45:23.911745 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:24 crc kubenswrapper[4915]: I0127 18:45:24.188479 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:24 crc kubenswrapper[4915]: I0127 18:45:24.190365 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:24 crc kubenswrapper[4915]: I0127 18:45:24.192671 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lpn7t"
Jan 27 18:45:26 crc kubenswrapper[4915]: I0127 18:45:26.729068 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6zlh"]
Jan 27 18:45:26 crc kubenswrapper[4915]: I0127 18:45:26.730122 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6zlh" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="registry-server" containerID="cri-o://841ecd031ef5cf3b15113f1d0f573fcd17c668b5b411b2390dfea442d439d69d" gracePeriod=2
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.147113 4915 generic.go:334] "Generic (PLEG): container finished" podID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerID="841ecd031ef5cf3b15113f1d0f573fcd17c668b5b411b2390dfea442d439d69d" exitCode=0
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.147169 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerDied","Data":"841ecd031ef5cf3b15113f1d0f573fcd17c668b5b411b2390dfea442d439d69d"}
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.147597 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6zlh" event={"ID":"4ea88d8e-91fe-42e8-869b-c95deff169b2","Type":"ContainerDied","Data":"57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880"}
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.147626 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57174efae0dba5a9dc44ebe65275a6a483058c1375af08b68352afa79210f880"
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.179701 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.279585 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities\") pod \"4ea88d8e-91fe-42e8-869b-c95deff169b2\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") "
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.279757 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q5cx\" (UniqueName: \"kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx\") pod \"4ea88d8e-91fe-42e8-869b-c95deff169b2\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") "
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.279887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content\") pod \"4ea88d8e-91fe-42e8-869b-c95deff169b2\" (UID: \"4ea88d8e-91fe-42e8-869b-c95deff169b2\") "
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.281122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities" (OuterVolumeSpecName: "utilities") pod "4ea88d8e-91fe-42e8-869b-c95deff169b2" (UID: "4ea88d8e-91fe-42e8-869b-c95deff169b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.289447 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx" (OuterVolumeSpecName: "kube-api-access-9q5cx") pod "4ea88d8e-91fe-42e8-869b-c95deff169b2" (UID: "4ea88d8e-91fe-42e8-869b-c95deff169b2"). InnerVolumeSpecName "kube-api-access-9q5cx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.358991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ea88d8e-91fe-42e8-869b-c95deff169b2" (UID: "4ea88d8e-91fe-42e8-869b-c95deff169b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.381247 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.381293 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q5cx\" (UniqueName: \"kubernetes.io/projected/4ea88d8e-91fe-42e8-869b-c95deff169b2-kube-api-access-9q5cx\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.381313 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea88d8e-91fe-42e8-869b-c95deff169b2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.727098 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"]
Jan 27 18:45:27 crc kubenswrapper[4915]: I0127 18:45:27.727329 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qz9ll" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="registry-server" containerID="cri-o://557d0dfe91a3afc06edbec15830b941b903661ef330e37811bb7f380ec7e2f5b" gracePeriod=2
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.157465 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f662342-5dc6-43c7-b6a1-242751990f64" containerID="557d0dfe91a3afc06edbec15830b941b903661ef330e37811bb7f380ec7e2f5b" exitCode=0
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.157543 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerDied","Data":"557d0dfe91a3afc06edbec15830b941b903661ef330e37811bb7f380ec7e2f5b"}
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.157604 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6zlh"
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.187099 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6zlh"]
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.190964 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6zlh"]
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.690577 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.806420 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlrjb\" (UniqueName: \"kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb\") pod \"3f662342-5dc6-43c7-b6a1-242751990f64\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") "
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.806493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities\") pod \"3f662342-5dc6-43c7-b6a1-242751990f64\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") "
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.806682 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content\") pod \"3f662342-5dc6-43c7-b6a1-242751990f64\" (UID: \"3f662342-5dc6-43c7-b6a1-242751990f64\") "
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.807370 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities" (OuterVolumeSpecName: "utilities") pod "3f662342-5dc6-43c7-b6a1-242751990f64" (UID: "3f662342-5dc6-43c7-b6a1-242751990f64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.811244 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb" (OuterVolumeSpecName: "kube-api-access-rlrjb") pod "3f662342-5dc6-43c7-b6a1-242751990f64" (UID: "3f662342-5dc6-43c7-b6a1-242751990f64"). InnerVolumeSpecName "kube-api-access-rlrjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.873037 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f662342-5dc6-43c7-b6a1-242751990f64" (UID: "3f662342-5dc6-43c7-b6a1-242751990f64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.910601 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlrjb\" (UniqueName: \"kubernetes.io/projected/3f662342-5dc6-43c7-b6a1-242751990f64-kube-api-access-rlrjb\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.910646 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:28 crc kubenswrapper[4915]: I0127 18:45:28.910656 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f662342-5dc6-43c7-b6a1-242751990f64-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.051713 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerName="oauth-openshift" containerID="cri-o://e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c" gracePeriod=15
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.168035 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qz9ll" event={"ID":"3f662342-5dc6-43c7-b6a1-242751990f64","Type":"ContainerDied","Data":"4b1c2f3b1e7d40f85bd1d7fbd415f8031b49e49e6e63719a88c11c0c87773f88"}
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.168093 4915 scope.go:117] "RemoveContainer" containerID="557d0dfe91a3afc06edbec15830b941b903661ef330e37811bb7f380ec7e2f5b"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.168120 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qz9ll"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.195817 4915 scope.go:117] "RemoveContainer" containerID="56caf6f8389d73b9681278297ed287bca0a4bc646e7aed5277191728963b4b87"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.216278 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"]
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.224629 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qz9ll"]
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.235742 4915 scope.go:117] "RemoveContainer" containerID="4f5a23e527f6dcccb35c87fd050801ff8f67b5900643b42fe3c911767e16991f"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.362864 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" path="/var/lib/kubelet/pods/3f662342-5dc6-43c7-b6a1-242751990f64/volumes"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.363548 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" path="/var/lib/kubelet/pods/4ea88d8e-91fe-42e8-869b-c95deff169b2/volumes"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.451410 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619462 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619546 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619631 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwc8s\" (UniqueName: \"kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619676 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619707 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619813 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619848 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619880 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619908 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.619973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620020 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620054 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620088 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert\") pod \"864c2cff-cecd-4156-afd9-088a9b9a1956\" (UID: \"864c2cff-cecd-4156-afd9-088a9b9a1956\") "
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620273 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620537 4915 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620595 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.620702 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.621019 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.621135 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.626445 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.626812 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.627088 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.627555 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.627610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.627873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.629102 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.630638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.630976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s" (OuterVolumeSpecName: "kube-api-access-bwc8s") pod "864c2cff-cecd-4156-afd9-088a9b9a1956" (UID: "864c2cff-cecd-4156-afd9-088a9b9a1956"). InnerVolumeSpecName "kube-api-access-bwc8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721690 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721742 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721755 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwc8s\" (UniqueName: \"kubernetes.io/projected/864c2cff-cecd-4156-afd9-088a9b9a1956-kube-api-access-bwc8s\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721765 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721784 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721815 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721825 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721840 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721850 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721861 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721871 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721882 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:29 crc kubenswrapper[4915]: I0127 18:45:29.721892 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c2cff-cecd-4156-afd9-088a9b9a1956-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.179647 4915 generic.go:334] "Generic (PLEG): container finished" podID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerID="e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c" exitCode=0
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.179741 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" event={"ID":"864c2cff-cecd-4156-afd9-088a9b9a1956","Type":"ContainerDied","Data":"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"}
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.179763 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd"
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.180086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kcmfd" event={"ID":"864c2cff-cecd-4156-afd9-088a9b9a1956","Type":"ContainerDied","Data":"8c436607be9305b9187686587ead7b1560d672f9a8ca79a9e930e42f50b27ebc"}
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.180130 4915 scope.go:117] "RemoveContainer" containerID="e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.209077 4915 scope.go:117] "RemoveContainer" containerID="e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"
Jan 27 18:45:30 crc kubenswrapper[4915]: E0127 18:45:30.209605 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c\": container with ID starting with e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c not found: ID does not exist" containerID="e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"
Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.209682 4915 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c"} err="failed to get container status \"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c\": rpc error: code = NotFound desc = could not find container \"e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c\": container with ID starting with e6a88cbcf1b3a274752f1a8918f20c75e2549351819ab3cbfce234d4e2058d3c not found: ID does not exist" Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.223703 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"] Jan 27 18:45:30 crc kubenswrapper[4915]: I0127 18:45:30.230001 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kcmfd"] Jan 27 18:45:31 crc kubenswrapper[4915]: I0127 18:45:31.364744 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" path="/var/lib/kubelet/pods/864c2cff-cecd-4156-afd9-088a9b9a1956/volumes" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.736712 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58cd8c9949-vzfdg"] Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737431 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737453 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737471 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737484 4915 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737503 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737518 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737539 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737552 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737569 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737584 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737606 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737618 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737637 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737649 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737664 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737676 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="extract-content" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737694 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737707 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737725 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerName="oauth-openshift" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737737 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerName="oauth-openshift" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737757 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737768 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737822 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737834 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: E0127 18:45:37.737850 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.737861 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="extract-utilities" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738052 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="864c2cff-cecd-4156-afd9-088a9b9a1956" containerName="oauth-openshift" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738081 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="67518fbf-bc2b-457c-a2ae-a5b31d8cf8e4" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738101 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f772375-40f6-46c4-84ea-7eab8f29e6de" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738125 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f662342-5dc6-43c7-b6a1-242751990f64" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738139 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea88d8e-91fe-42e8-869b-c95deff169b2" containerName="registry-server" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.738757 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.741146 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.747299 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.747713 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.747721 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.748341 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.748591 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.749386 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.750299 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.750376 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.751574 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 
18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.751785 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.751956 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.758718 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.781449 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.796969 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.796977 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58cd8c9949-vzfdg"] Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.833734 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.833850 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: 
\"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.833881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.833903 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-login\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.833926 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834143 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-error\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 
18:45:37.834241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-dir\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834291 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-session\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834462 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834528 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84p62\" (UniqueName: \"kubernetes.io/projected/ea028c17-cdc1-484a-95c0-9c729fd117ed-kube-api-access-84p62\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834565 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-policies\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834604 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834712 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.834778 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.935885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: 
\"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.935971 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936147 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936197 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936329 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936400 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-login\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936476 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.936694 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-error\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937128 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-dir\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937201 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-session\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: 
\"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937251 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84p62\" (UniqueName: \"kubernetes.io/projected/ea028c17-cdc1-484a-95c0-9c729fd117ed-kube-api-access-84p62\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937362 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-policies\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937417 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937700 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-dir\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.937728 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.938549 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.938915 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-audit-policies\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.938951 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc 
kubenswrapper[4915]: I0127 18:45:37.944888 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-login\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.945574 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-error\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.946270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-session\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.946777 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.947251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.947805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.947902 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.948356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea028c17-cdc1-484a-95c0-9c729fd117ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:37 crc kubenswrapper[4915]: I0127 18:45:37.972957 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84p62\" (UniqueName: \"kubernetes.io/projected/ea028c17-cdc1-484a-95c0-9c729fd117ed-kube-api-access-84p62\") pod \"oauth-openshift-58cd8c9949-vzfdg\" (UID: \"ea028c17-cdc1-484a-95c0-9c729fd117ed\") " 
pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:38 crc kubenswrapper[4915]: I0127 18:45:38.062784 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:38 crc kubenswrapper[4915]: I0127 18:45:38.496953 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58cd8c9949-vzfdg"] Jan 27 18:45:39 crc kubenswrapper[4915]: I0127 18:45:39.235750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" event={"ID":"ea028c17-cdc1-484a-95c0-9c729fd117ed","Type":"ContainerStarted","Data":"4a276ab65248390ffd302519bc0c0c13c4f8d3186af8e348d747601ef544086c"} Jan 27 18:45:39 crc kubenswrapper[4915]: I0127 18:45:39.235809 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" event={"ID":"ea028c17-cdc1-484a-95c0-9c729fd117ed","Type":"ContainerStarted","Data":"b7eb03e1a6ab3771f25105fe59482b9a1f27bd4c838c5103633a3a7ab18ca385"} Jan 27 18:45:39 crc kubenswrapper[4915]: I0127 18:45:39.236089 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:39 crc kubenswrapper[4915]: I0127 18:45:39.249657 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" Jan 27 18:45:39 crc kubenswrapper[4915]: I0127 18:45:39.267215 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58cd8c9949-vzfdg" podStartSLOduration=35.267199146 podStartE2EDuration="35.267199146s" podCreationTimestamp="2026-01-27 18:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:45:39.262038207 +0000 
UTC m=+230.619891871" watchObservedRunningTime="2026-01-27 18:45:39.267199146 +0000 UTC m=+230.625052820" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.267731 4915 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.268881 4915 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269139 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9" gracePeriod=15 Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269188 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269230 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7" gracePeriod=15 Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269268 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4" gracePeriod=15 Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269257 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4" gracePeriod=15 Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.269303 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083" gracePeriod=15 Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.274848 4915 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275298 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275329 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275345 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275357 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275374 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275387 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275411 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275422 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275442 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275455 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275480 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275492 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: E0127 18:45:54.275514 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275526 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275706 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275730 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275745 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275757 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.275771 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.276190 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284458 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284565 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284673 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284762 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284778 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284908 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.284960 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.302045 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.302400 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390527 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390697 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390716 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.390820 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391200 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391238 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391265 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 
27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391341 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:54 crc kubenswrapper[4915]: I0127 18:45:54.391364 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.374871 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.377967 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.379184 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083" exitCode=0 Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.379214 4915 generic.go:334] "Generic (PLEG): container 
finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7" exitCode=0 Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.379225 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4" exitCode=0 Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.379236 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4" exitCode=2 Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.379301 4915 scope.go:117] "RemoveContainer" containerID="067c972c3a048aa6bb70ce4788185e45b1b170d5bb76716727f27d2421812518" Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.381544 4915 generic.go:334] "Generic (PLEG): container finished" podID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" containerID="059f72ac6ddd72ef62f1a858f2e3ba3960c45a666ad0c25da461da1ca55cee12" exitCode=0 Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.381579 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1c9d54c-fd3a-4729-beab-091e7acbcb83","Type":"ContainerDied","Data":"059f72ac6ddd72ef62f1a858f2e3ba3960c45a666ad0c25da461da1ca55cee12"} Jan 27 18:45:55 crc kubenswrapper[4915]: I0127 18:45:55.382841 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.451235 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.783079 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.784613 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.825194 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access\") pod \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.831415 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d1c9d54c-fd3a-4729-beab-091e7acbcb83" (UID: "d1c9d54c-fd3a-4729-beab-091e7acbcb83"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.925733 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock\") pod \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.925838 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir\") pod \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\" (UID: \"d1c9d54c-fd3a-4729-beab-091e7acbcb83\") " Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.925850 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock" (OuterVolumeSpecName: "var-lock") pod "d1c9d54c-fd3a-4729-beab-091e7acbcb83" (UID: "d1c9d54c-fd3a-4729-beab-091e7acbcb83"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.925976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d1c9d54c-fd3a-4729-beab-091e7acbcb83" (UID: "d1c9d54c-fd3a-4729-beab-091e7acbcb83"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.926217 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.926236 4915 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:56 crc kubenswrapper[4915]: I0127 18:45:56.926247 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1c9d54c-fd3a-4729-beab-091e7acbcb83-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.174910 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.175609 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.176171 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.176496 4915 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.331570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.332031 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.332065 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.331682 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.332313 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.332336 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.364628 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.432959 4915 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.432993 4915 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.433006 4915 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.464501 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.465727 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9" exitCode=0 Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.465834 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.465852 4915 scope.go:117] "RemoveContainer" containerID="0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.466405 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.466654 4915 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.467965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d1c9d54c-fd3a-4729-beab-091e7acbcb83","Type":"ContainerDied","Data":"7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c"} Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.468007 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7060cfed0ca0599d01be47dc66524a6ebd52ced6814f4cd1229b7870e8ed471c" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.468020 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.469891 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.470244 4915 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.472490 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.472878 4915 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.486698 4915 scope.go:117] "RemoveContainer" containerID="7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.501546 4915 scope.go:117] "RemoveContainer" containerID="81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4" Jan 27 18:45:57 crc 
kubenswrapper[4915]: I0127 18:45:57.514511 4915 scope.go:117] "RemoveContainer" containerID="e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.528201 4915 scope.go:117] "RemoveContainer" containerID="375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.545668 4915 scope.go:117] "RemoveContainer" containerID="9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.567917 4915 scope.go:117] "RemoveContainer" containerID="0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.568531 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\": container with ID starting with 0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083 not found: ID does not exist" containerID="0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.568572 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083"} err="failed to get container status \"0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\": rpc error: code = NotFound desc = could not find container \"0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083\": container with ID starting with 0eaffc22d50685b5df779491e9e0913b3b6bf06f5e04ccac36d1a25707154083 not found: ID does not exist" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.568600 4915 scope.go:117] "RemoveContainer" containerID="7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.569054 
4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\": container with ID starting with 7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7 not found: ID does not exist" containerID="7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.569112 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7"} err="failed to get container status \"7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\": rpc error: code = NotFound desc = could not find container \"7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7\": container with ID starting with 7e094bb5793d0dd0be1ad482aa481cf469c6d811bca8bea222103059473ba0a7 not found: ID does not exist" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.569152 4915 scope.go:117] "RemoveContainer" containerID="81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.569552 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\": container with ID starting with 81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4 not found: ID does not exist" containerID="81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.569592 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4"} err="failed to get container status \"81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\": rpc error: code = 
NotFound desc = could not find container \"81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4\": container with ID starting with 81bddbdc1fbe45efd8ad52a24ea1df80b35cd91fc21de1a77ecb28073da1ded4 not found: ID does not exist" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.569620 4915 scope.go:117] "RemoveContainer" containerID="e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.569970 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\": container with ID starting with e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4 not found: ID does not exist" containerID="e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.569993 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4"} err="failed to get container status \"e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\": rpc error: code = NotFound desc = could not find container \"e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4\": container with ID starting with e8d5e5f16a88e3706acb765dd287e4d2dd49a3ded811db2d8c6776831c910de4 not found: ID does not exist" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.570011 4915 scope.go:117] "RemoveContainer" containerID="375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.570270 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\": container with ID starting with 
375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9 not found: ID does not exist" containerID="375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.570347 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9"} err="failed to get container status \"375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\": rpc error: code = NotFound desc = could not find container \"375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9\": container with ID starting with 375cf36801e5e6c83d8b0ed3438fe4abe561ff64994f320ef537528f071888b9 not found: ID does not exist" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.570377 4915 scope.go:117] "RemoveContainer" containerID="9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26" Jan 27 18:45:57 crc kubenswrapper[4915]: E0127 18:45:57.570701 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\": container with ID starting with 9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26 not found: ID does not exist" containerID="9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26" Jan 27 18:45:57 crc kubenswrapper[4915]: I0127 18:45:57.570754 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26"} err="failed to get container status \"9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\": rpc error: code = NotFound desc = could not find container \"9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26\": container with ID starting with 9a57dbc39c22f2ee76256e0a879320e2f66316ecd7190ac63f5bd5a4474f5e26 not found: ID does not 
exist" Jan 27 18:45:59 crc kubenswrapper[4915]: I0127 18:45:59.361467 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:59 crc kubenswrapper[4915]: I0127 18:45:59.362264 4915 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:45:59 crc kubenswrapper[4915]: E0127 18:45:59.365838 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:59 crc kubenswrapper[4915]: I0127 18:45:59.366373 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:45:59 crc kubenswrapper[4915]: E0127 18:45:59.397651 4915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eaada95d4cc9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:45:59.396527261 +0000 UTC m=+250.754380935,LastTimestamp:2026-01-27 18:45:59.396527261 +0000 UTC m=+250.754380935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:45:59 crc kubenswrapper[4915]: I0127 18:45:59.480665 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f991674e2854bfa8490d68fa110c42ad43e393dbc65725b6b4db744984380830"} Jan 27 18:46:00 crc kubenswrapper[4915]: I0127 18:46:00.486634 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f"} Jan 27 18:46:00 crc 
kubenswrapper[4915]: E0127 18:46:00.487125 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:00 crc kubenswrapper[4915]: I0127 18:46:00.488383 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.778464 4915 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.778735 4915 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.778979 4915 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.779266 4915 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.779543 4915 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:00 crc kubenswrapper[4915]: I0127 18:46:00.779572 4915 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.779757 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Jan 27 18:46:00 crc kubenswrapper[4915]: E0127 18:46:00.980565 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Jan 27 18:46:01 crc kubenswrapper[4915]: E0127 18:46:01.381362 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Jan 27 18:46:01 crc kubenswrapper[4915]: E0127 18:46:01.493358 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:02 crc kubenswrapper[4915]: E0127 18:46:02.182562 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: 
connect: connection refused" interval="1.6s" Jan 27 18:46:03 crc kubenswrapper[4915]: E0127 18:46:03.784549 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Jan 27 18:46:06 crc kubenswrapper[4915]: E0127 18:46:06.986568 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="6.4s" Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.483928 4915 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.484044 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.538714 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.538833 4915 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c" exitCode=1 Jan 27 
18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.538877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c"} Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.539631 4915 scope.go:117] "RemoveContainer" containerID="5657e4e91bdca75d7c7b105eabef6e989b3431e1c0b4b2005bfe21f7a0be712c" Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.543712 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:07 crc kubenswrapper[4915]: I0127 18:46:07.544949 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.357611 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.359119 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.359721 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.387333 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.387379 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9" Jan 27 18:46:08 crc kubenswrapper[4915]: E0127 18:46:08.387914 4915 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.388513 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.547222 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.547333 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7bebdd3c3e07b950b967c19c3b6580f5a30c0a755b9b48e35fe5693854487635"} Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.548369 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c6b2dfdddf1f7bd9c799d1c90439288541d83dfd9daea69e8b3e0b250d6909c"} Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.548506 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:08 crc kubenswrapper[4915]: I0127 18:46:08.548767 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: E0127 18:46:09.170174 4915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eaada95d4cc9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 18:45:59.396527261 +0000 UTC m=+250.754380935,LastTimestamp:2026-01-27 18:45:59.396527261 +0000 UTC m=+250.754380935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.365880 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.366660 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.367413 4915 status_manager.go:851] "Failed to get status for pod" 
podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.386056 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.393464 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.394121 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.394683 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.395306 4915 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.555883 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="10c825ea3acfc45f6b6425818252b93d2714dad1a0374dddce184f9578408eb5" exitCode=0
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.556535 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.556564 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.556144 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"10c825ea3acfc45f6b6425818252b93d2714dad1a0374dddce184f9578408eb5"}
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.557412 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.557522 4915 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused"
Jan 27 18:46:09 crc kubenswrapper[4915]: E0127 18:46:09.557525 4915 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.558000 4915 status_manager.go:851] "Failed to get status for pod" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused"
Jan 27 18:46:09 crc kubenswrapper[4915]: I0127 18:46:09.558475 4915 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused"
Jan 27 18:46:10 crc kubenswrapper[4915]: I0127 18:46:10.571002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"125d2de8b7285408ed52fc80c1387314d9e9af86d7242ba2f95053278a6fbca0"}
Jan 27 18:46:10 crc kubenswrapper[4915]: I0127 18:46:10.571078 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"271c95d4f08ced09c435f3300283f694e182909c64f7f45f1a80b9e8f9aa16f9"}
Jan 27 18:46:10 crc kubenswrapper[4915]: I0127 18:46:10.571096 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6bd30d2244e9ac49017c4db5bc373a92d6061c3cbcf7de926c2fe3570f1f5a89"}
Jan 27 18:46:11 crc kubenswrapper[4915]: I0127 18:46:11.581233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e39823f17ca9568d14bd9fca5baef2a49f4bdfeed8304f6311ac9977c64e4acd"}
Jan 27 18:46:11 crc kubenswrapper[4915]: I0127 18:46:11.581532 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"88d9251b9a18c55d8929062b931367a56663b5db72cd0e23bf54fcf5273e909f"}
Jan 27 18:46:11 crc kubenswrapper[4915]: I0127 18:46:11.581646 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:11 crc kubenswrapper[4915]: I0127 18:46:11.581674 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:13 crc kubenswrapper[4915]: I0127 18:46:13.390031 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:13 crc kubenswrapper[4915]: I0127 18:46:13.390420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:13 crc kubenswrapper[4915]: I0127 18:46:13.400248 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.591571 4915 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.610424 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.610674 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.610684 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.614910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:16 crc kubenswrapper[4915]: I0127 18:46:16.615703 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="987d9059-c9a8-4297-91ef-8c20868a5ad6"
Jan 27 18:46:17 crc kubenswrapper[4915]: I0127 18:46:17.489694 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 18:46:17 crc kubenswrapper[4915]: I0127 18:46:17.616582 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:17 crc kubenswrapper[4915]: I0127 18:46:17.616621 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:17 crc kubenswrapper[4915]: I0127 18:46:17.622213 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="987d9059-c9a8-4297-91ef-8c20868a5ad6"
Jan 27 18:46:18 crc kubenswrapper[4915]: I0127 18:46:18.625082 4915 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:18 crc kubenswrapper[4915]: I0127 18:46:18.625152 4915 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2ff27c42-efc5-47ca-af7e-18a62d4dded9"
Jan 27 18:46:18 crc kubenswrapper[4915]: I0127 18:46:18.627908 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="987d9059-c9a8-4297-91ef-8c20868a5ad6"
Jan 27 18:46:25 crc kubenswrapper[4915]: I0127 18:46:25.709647 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 27 18:46:25 crc kubenswrapper[4915]: I0127 18:46:25.779944 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 18:46:26 crc kubenswrapper[4915]: I0127 18:46:26.459543 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.201053 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.334573 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.421719 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.479372 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.737428 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.789115 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.798337 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.801785 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.925602 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.942959 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 18:46:27 crc kubenswrapper[4915]: I0127 18:46:27.992328 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.096692 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.489664 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.525431 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.570159 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.615171 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.648540 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.719217 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.762997 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.777228 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 27 18:46:28 crc kubenswrapper[4915]: I0127 18:46:28.944325 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.000218 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.003927 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.112340 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.160129 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.174662 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.337745 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.534135 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.686960 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.693388 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.727785 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.754566 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.771706 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.816936 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.827128 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.882224 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.917497 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 27 18:46:29 crc kubenswrapper[4915]: I0127 18:46:29.981535 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.181877 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.225134 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.551496 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.636414 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.653656 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.688938 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.691355 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.771152 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.781760 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.787752 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.801416 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 27 18:46:30 crc kubenswrapper[4915]: I0127 18:46:30.939398 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.096260 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.152485 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.205276 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.306152 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.475073 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.524976 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.540829 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.544238 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.630024 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.660588 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.669460 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.734181 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.803094 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.825643 4915 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.829958 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.833319 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.833388 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.842195 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 18:46:31 crc kubenswrapper[4915]: I0127 18:46:31.856958 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.856932049 podStartE2EDuration="15.856932049s" podCreationTimestamp="2026-01-27 18:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:46:31.85242626 +0000 UTC m=+283.210279984" watchObservedRunningTime="2026-01-27 18:46:31.856932049 +0000 UTC m=+283.214785753"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.036179 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.105382 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.144616 4915 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.158281 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.290446 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.356006 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.416612 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.566667 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.582629 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.607949 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.675256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.695493 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.717287 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.739972 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.786131 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.801691 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.802847 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 27 18:46:32 crc kubenswrapper[4915]: I0127 18:46:32.817283 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.078674 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.275560 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.326877 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.393178 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.462194 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.516080 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.584081 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.658988 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.723625 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.775525 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.811594 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.830619 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.842849 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.903858 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.928411 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.940417 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.974424 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 27 18:46:33 crc kubenswrapper[4915]: I0127 18:46:33.998680 4915 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.021788 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.022920 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.148668 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.236888 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.237567 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.319758 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.348655 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.397563 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.445168 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.449358 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.594781 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.783406 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.798331 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.851106 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 18:46:34 crc kubenswrapper[4915]: I0127 18:46:34.941107 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.000070 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.006450 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.033511 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.132730 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.150121 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.158128 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.171497 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.186091 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.284017 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.394857 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.395039 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.419358 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.429529 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.454123 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.527507 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.550487 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.690760 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.766209 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.842057 4915 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.917948 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.919174 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 18:46:35 crc kubenswrapper[4915]: I0127 18:46:35.956153 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.013897 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.034585 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.091391 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.187691 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.196148 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.203893 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.261938 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.287859 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.362238 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.396482 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.417574 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.512678 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.535904 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.537621 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.564488 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.577066 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.590216 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.654464 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.690254 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.711063 4915 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.747300 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.773124 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.795170 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.811851 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.936535 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.978990 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.983651 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 27 18:46:36 crc kubenswrapper[4915]: I0127 18:46:36.991882 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.104830 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.115992 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.159107 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.165864 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.168553 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 18:46:37 crc
kubenswrapper[4915]: I0127 18:46:37.196838 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.218681 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.333866 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.504142 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.520586 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.531682 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.562513 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.779193 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.949813 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 18:46:37 crc kubenswrapper[4915]: I0127 18:46:37.986900 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.067812 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.127526 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.152745 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.159677 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.181328 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.209123 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.298579 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.403333 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.596002 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.626854 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.667653 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.894726 4915 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.938737 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.943523 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 18:46:38 crc kubenswrapper[4915]: I0127 18:46:38.953727 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.022115 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.036360 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.046815 4915 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.047096 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f" gracePeriod=5 Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.120704 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.145324 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.148161 4915 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.180171 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.227325 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.261759 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.269409 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.344097 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.360782 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.370783 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.373929 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.376948 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.438926 4915 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 18:46:39 crc 
kubenswrapper[4915]: I0127 18:46:39.440028 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.464400 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.468871 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.474014 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.482894 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.575685 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.585267 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.619961 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.621967 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.625067 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.681975 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.766780 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 18:46:39 crc kubenswrapper[4915]: I0127 18:46:39.846637 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.015398 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.117819 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.351950 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.449938 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.525942 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.617518 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.656268 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 18:46:40 crc kubenswrapper[4915]: I0127 18:46:40.773488 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.020559 4915 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.376981 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.457953 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.468924 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.554552 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.786666 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 18:46:41 crc kubenswrapper[4915]: I0127 18:46:41.810504 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.044074 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.070760 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.091023 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.188915 4915 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.246773 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.281085 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.306188 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 18:46:42 crc kubenswrapper[4915]: I0127 18:46:42.597746 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.661923 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.664152 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lpn7t" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="registry-server" containerID="cri-o://3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716" gracePeriod=30 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.669079 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.669341 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q4xjv" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="registry-server" containerID="cri-o://c4a31417c295ab9013ffc624ecd66d0e0f8c842ba3c8a15908e270e5a1e5bac5" gracePeriod=30 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 
18:46:43.687101 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.687476 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator" containerID="cri-o://33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe" gracePeriod=30 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.695524 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.696054 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s9b8l" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="registry-server" containerID="cri-o://c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9" gracePeriod=30 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.712735 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.713042 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvnw2" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="registry-server" containerID="cri-o://7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2" gracePeriod=30 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.736160 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6bqr"] Jan 27 18:46:43 crc kubenswrapper[4915]: E0127 18:46:43.739411 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" containerName="installer" Jan 27 
18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.739525 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" containerName="installer" Jan 27 18:46:43 crc kubenswrapper[4915]: E0127 18:46:43.739643 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.739732 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.739990 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.740117 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c9d54c-fd3a-4729-beab-091e7acbcb83" containerName="installer" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.740688 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.748801 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6bqr"] Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.793115 4915 generic.go:334] "Generic (PLEG): container finished" podID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerID="c4a31417c295ab9013ffc624ecd66d0e0f8c842ba3c8a15908e270e5a1e5bac5" exitCode=0 Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.793157 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerDied","Data":"c4a31417c295ab9013ffc624ecd66d0e0f8c842ba3c8a15908e270e5a1e5bac5"} Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.842691 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8q9n\" (UniqueName: \"kubernetes.io/projected/2d935ad0-7baa-422a-af52-0a58d7c14312-kube-api-access-m8q9n\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.842814 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.842903 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.943882 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.944089 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.944141 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8q9n\" (UniqueName: \"kubernetes.io/projected/2d935ad0-7baa-422a-af52-0a58d7c14312-kube-api-access-m8q9n\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.945455 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc 
kubenswrapper[4915]: I0127 18:46:43.954560 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2d935ad0-7baa-422a-af52-0a58d7c14312-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:43 crc kubenswrapper[4915]: I0127 18:46:43.961722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8q9n\" (UniqueName: \"kubernetes.io/projected/2d935ad0-7baa-422a-af52-0a58d7c14312-kube-api-access-m8q9n\") pod \"marketplace-operator-79b997595-g6bqr\" (UID: \"2d935ad0-7baa-422a-af52-0a58d7c14312\") " pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.168346 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.174187 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.188225 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.199755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.200747 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.219675 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.220133 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.223101 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351567 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities\") pod \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351606 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr76n\" (UniqueName: \"kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n\") pod \"a0b86903-8500-47c1-9fdb-5b0dea888375\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351631 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq5k2\" (UniqueName: \"kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2\") pod \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351652 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ngl6\" (UniqueName: \"kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6\") pod \"75b62d1e-bcc6-46c1-ac10-7e2819203102\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351675 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities\") pod \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351702 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content\") pod \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\" (UID: \"f973ebe4-3ce2-49c7-96b8-c7e238c606a7\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351727 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics\") pod \"a0b86903-8500-47c1-9fdb-5b0dea888375\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351748 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dh24\" (UniqueName: \"kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24\") pod \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351781 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities\") pod \"8ec10b90-7f91-4bda-bcaa-cce472027970\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351813 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities\") pod 
\"75b62d1e-bcc6-46c1-ac10-7e2819203102\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351839 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content\") pod \"8ec10b90-7f91-4bda-bcaa-cce472027970\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351861 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content\") pod \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\" (UID: \"644a6ae4-550e-4ca7-9394-d2e09abb6b16\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351882 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content\") pod \"75b62d1e-bcc6-46c1-ac10-7e2819203102\" (UID: \"75b62d1e-bcc6-46c1-ac10-7e2819203102\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.351922 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca\") pod \"a0b86903-8500-47c1-9fdb-5b0dea888375\" (UID: \"a0b86903-8500-47c1-9fdb-5b0dea888375\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.352666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities" (OuterVolumeSpecName: "utilities") pod "8ec10b90-7f91-4bda-bcaa-cce472027970" (UID: "8ec10b90-7f91-4bda-bcaa-cce472027970"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.352762 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities" (OuterVolumeSpecName: "utilities") pod "f973ebe4-3ce2-49c7-96b8-c7e238c606a7" (UID: "f973ebe4-3ce2-49c7-96b8-c7e238c606a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.353579 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a0b86903-8500-47c1-9fdb-5b0dea888375" (UID: "a0b86903-8500-47c1-9fdb-5b0dea888375"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.353623 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw7l6\" (UniqueName: \"kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6\") pod \"8ec10b90-7f91-4bda-bcaa-cce472027970\" (UID: \"8ec10b90-7f91-4bda-bcaa-cce472027970\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.353734 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities" (OuterVolumeSpecName: "utilities") pod "644a6ae4-550e-4ca7-9394-d2e09abb6b16" (UID: "644a6ae4-550e-4ca7-9394-d2e09abb6b16"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.353754 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities" (OuterVolumeSpecName: "utilities") pod "75b62d1e-bcc6-46c1-ac10-7e2819203102" (UID: "75b62d1e-bcc6-46c1-ac10-7e2819203102"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.354120 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.354188 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.354284 4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.354342 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.354430 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.355008 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a0b86903-8500-47c1-9fdb-5b0dea888375" (UID: "a0b86903-8500-47c1-9fdb-5b0dea888375"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.355070 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24" (OuterVolumeSpecName: "kube-api-access-7dh24") pod "644a6ae4-550e-4ca7-9394-d2e09abb6b16" (UID: "644a6ae4-550e-4ca7-9394-d2e09abb6b16"). InnerVolumeSpecName "kube-api-access-7dh24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.355425 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2" (OuterVolumeSpecName: "kube-api-access-cq5k2") pod "f973ebe4-3ce2-49c7-96b8-c7e238c606a7" (UID: "f973ebe4-3ce2-49c7-96b8-c7e238c606a7"). InnerVolumeSpecName "kube-api-access-cq5k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.355932 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6" (OuterVolumeSpecName: "kube-api-access-6ngl6") pod "75b62d1e-bcc6-46c1-ac10-7e2819203102" (UID: "75b62d1e-bcc6-46c1-ac10-7e2819203102"). InnerVolumeSpecName "kube-api-access-6ngl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.357764 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n" (OuterVolumeSpecName: "kube-api-access-pr76n") pod "a0b86903-8500-47c1-9fdb-5b0dea888375" (UID: "a0b86903-8500-47c1-9fdb-5b0dea888375"). InnerVolumeSpecName "kube-api-access-pr76n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.363532 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6" (OuterVolumeSpecName: "kube-api-access-mw7l6") pod "8ec10b90-7f91-4bda-bcaa-cce472027970" (UID: "8ec10b90-7f91-4bda-bcaa-cce472027970"). InnerVolumeSpecName "kube-api-access-mw7l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.383765 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "644a6ae4-550e-4ca7-9394-d2e09abb6b16" (UID: "644a6ae4-550e-4ca7-9394-d2e09abb6b16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.405379 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f973ebe4-3ce2-49c7-96b8-c7e238c606a7" (UID: "f973ebe4-3ce2-49c7-96b8-c7e238c606a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.438929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ec10b90-7f91-4bda-bcaa-cce472027970" (UID: "8ec10b90-7f91-4bda-bcaa-cce472027970"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.447538 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g6bqr"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.455422 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw7l6\" (UniqueName: \"kubernetes.io/projected/8ec10b90-7f91-4bda-bcaa-cce472027970-kube-api-access-mw7l6\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.455984 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr76n\" (UniqueName: \"kubernetes.io/projected/a0b86903-8500-47c1-9fdb-5b0dea888375-kube-api-access-pr76n\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456006 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq5k2\" (UniqueName: \"kubernetes.io/projected/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-kube-api-access-cq5k2\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456017 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ngl6\" (UniqueName: \"kubernetes.io/projected/75b62d1e-bcc6-46c1-ac10-7e2819203102-kube-api-access-6ngl6\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456030 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f973ebe4-3ce2-49c7-96b8-c7e238c606a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456066 4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a0b86903-8500-47c1-9fdb-5b0dea888375-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456078 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dh24\" (UniqueName: \"kubernetes.io/projected/644a6ae4-550e-4ca7-9394-d2e09abb6b16-kube-api-access-7dh24\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456090 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec10b90-7f91-4bda-bcaa-cce472027970-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.456101 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644a6ae4-550e-4ca7-9394-d2e09abb6b16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.488668 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75b62d1e-bcc6-46c1-ac10-7e2819203102" (UID: "75b62d1e-bcc6-46c1-ac10-7e2819203102"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.556998 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b62d1e-bcc6-46c1-ac10-7e2819203102-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.615071 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.615153 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.758876 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.758951 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759009 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759057 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" 
(OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759124 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759218 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759299 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759373 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759434 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759746 4915 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759779 4915 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759843 4915 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.759868 4915 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.766542 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.802118 4915 generic.go:334] "Generic (PLEG): container finished" podID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerID="33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe" exitCode=0 Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.802245 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" event={"ID":"a0b86903-8500-47c1-9fdb-5b0dea888375","Type":"ContainerDied","Data":"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.802382 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" event={"ID":"a0b86903-8500-47c1-9fdb-5b0dea888375","Type":"ContainerDied","Data":"08463e40c0545d2fe137831a9a7649102c4a1701a2a49a73265ccd8839f5eb83"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.802419 4915 scope.go:117] "RemoveContainer" containerID="33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.802727 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qgxwz" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.806038 4915 generic.go:334] "Generic (PLEG): container finished" podID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerID="c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9" exitCode=0 Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.806151 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerDied","Data":"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.806178 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9b8l" event={"ID":"644a6ae4-550e-4ca7-9394-d2e09abb6b16","Type":"ContainerDied","Data":"f0794cb6b7e77d3c42b4b7eb6d620fe043049155b35a0c4e0a2c25832150f759"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.806273 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9b8l" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.809094 4915 generic.go:334] "Generic (PLEG): container finished" podID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerID="7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2" exitCode=0 Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.809205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerDied","Data":"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.809252 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvnw2" event={"ID":"75b62d1e-bcc6-46c1-ac10-7e2819203102","Type":"ContainerDied","Data":"be8028adaa39c1cc90727bdce28f2023a89596cbc6a05a63cfadac50c90e09ec"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.809361 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvnw2" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.821541 4915 generic.go:334] "Generic (PLEG): container finished" podID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerID="3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716" exitCode=0 Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.821671 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerDied","Data":"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.821719 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpn7t" event={"ID":"f973ebe4-3ce2-49c7-96b8-c7e238c606a7","Type":"ContainerDied","Data":"cdc3a017bd1385f130018331244d077d39cdf576b018884f8cbf534149ee9e6f"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.822111 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lpn7t" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.831831 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4xjv" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.832124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4xjv" event={"ID":"8ec10b90-7f91-4bda-bcaa-cce472027970","Type":"ContainerDied","Data":"89c1afc1e43c45aa1842563008228b9a969fbe8e3822377cc62868c40007f3a1"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.835815 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.835876 4915 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f" exitCode=137 Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.836021 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.837432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" event={"ID":"2d935ad0-7baa-422a-af52-0a58d7c14312","Type":"ContainerStarted","Data":"cdd07a508da693e8cb2efc8511cadec140eec47844a0cd1384c61b482d5c2530"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.837464 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" event={"ID":"2d935ad0-7baa-422a-af52-0a58d7c14312","Type":"ContainerStarted","Data":"f71e54ddbd75ca1834f20e321add1b13807d1bb4703a064ff565a16cf95f7f7f"} Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.838681 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.850526 4915 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g6bqr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.850613 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" podUID="2d935ad0-7baa-422a-af52-0a58d7c14312" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.853166 4915 scope.go:117] "RemoveContainer" containerID="33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe" Jan 27 18:46:44 crc kubenswrapper[4915]: E0127 18:46:44.854145 4915 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe\": container with ID starting with 33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe not found: ID does not exist" containerID="33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.854426 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe"} err="failed to get container status \"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe\": rpc error: code = NotFound desc = could not find container \"33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe\": container with ID starting with 33056bd315755d0f3665517e23a1ce0ae0f6a3f02c4995f801a7e1010ee144fe not found: ID does not exist" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.854594 4915 scope.go:117] "RemoveContainer" containerID="c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.866677 4915 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.884406 4915 scope.go:117] "RemoveContainer" containerID="29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.904735 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr" podStartSLOduration=1.904698526 podStartE2EDuration="1.904698526s" podCreationTimestamp="2026-01-27 18:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 18:46:44.876938234 +0000 UTC m=+296.234791898" watchObservedRunningTime="2026-01-27 18:46:44.904698526 +0000 UTC m=+296.262552190" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.905519 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.913361 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9b8l"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.917528 4915 scope.go:117] "RemoveContainer" containerID="932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.920681 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.927101 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q4xjv"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.932719 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"] Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.935126 4915 scope.go:117] "RemoveContainer" containerID="c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9" Jan 27 18:46:44 crc kubenswrapper[4915]: E0127 18:46:44.935698 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9\": container with ID starting with c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9 not found: ID does not exist" containerID="c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9" Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.935882 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9"} err="failed to get container status \"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9\": rpc error: code = NotFound desc = could not find container \"c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9\": container with ID starting with c8fce89cba2721932c94a70df9d802d494457f14d77407eda045a341964749d9 not found: ID does not exist"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.936044 4915 scope.go:117] "RemoveContainer" containerID="29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78"
Jan 27 18:46:44 crc kubenswrapper[4915]: E0127 18:46:44.939371 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78\": container with ID starting with 29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78 not found: ID does not exist" containerID="29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.939423 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78"} err="failed to get container status \"29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78\": rpc error: code = NotFound desc = could not find container \"29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78\": container with ID starting with 29d6c162833f8a2438e10040b08ea768df85693277237a099428934b91fb8e78 not found: ID does not exist"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.939461 4915 scope.go:117] "RemoveContainer" containerID="932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.939687 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qgxwz"]
Jan 27 18:46:44 crc kubenswrapper[4915]: E0127 18:46:44.943024 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3\": container with ID starting with 932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3 not found: ID does not exist" containerID="932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.943090 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3"} err="failed to get container status \"932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3\": rpc error: code = NotFound desc = could not find container \"932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3\": container with ID starting with 932d1bbb8aedc34c3d90ee4c157ea85a50a7e36feee55904e0652da61bd60bc3 not found: ID does not exist"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.943144 4915 scope.go:117] "RemoveContainer" containerID="7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.944409 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"]
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.949893 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvnw2"]
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.953689 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"]
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.957870 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lpn7t"]
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.959020 4915 scope.go:117] "RemoveContainer" containerID="cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551"
Jan 27 18:46:44 crc kubenswrapper[4915]: I0127 18:46:44.986659 4915 scope.go:117] "RemoveContainer" containerID="0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.004742 4915 scope.go:117] "RemoveContainer" containerID="7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.005398 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2\": container with ID starting with 7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2 not found: ID does not exist" containerID="7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.005453 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2"} err="failed to get container status \"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2\": rpc error: code = NotFound desc = could not find container \"7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2\": container with ID starting with 7db5923a3635ae64631a1a2ded481b84481a469ce38cd9aa5ee4cd3ad7845be2 not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.005487 4915 scope.go:117] "RemoveContainer" containerID="cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.006134 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551\": container with ID starting with cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551 not found: ID does not exist" containerID="cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.006267 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551"} err="failed to get container status \"cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551\": rpc error: code = NotFound desc = could not find container \"cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551\": container with ID starting with cdbaee307cf76b2c7f0a909615bdc310164892d584fcd54f8291d391c12bd551 not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.006391 4915 scope.go:117] "RemoveContainer" containerID="0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.007700 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8\": container with ID starting with 0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8 not found: ID does not exist" containerID="0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.007752 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8"} err="failed to get container status \"0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8\": rpc error: code = NotFound desc = could not find container \"0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8\": container with ID starting with 0e79da55f615f880e6e715704f3451cf5a02ee7eafbf471c84174e349a2ccae8 not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.007775 4915 scope.go:117] "RemoveContainer" containerID="3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.021779 4915 scope.go:117] "RemoveContainer" containerID="de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.039176 4915 scope.go:117] "RemoveContainer" containerID="ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.053943 4915 scope.go:117] "RemoveContainer" containerID="3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.054328 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716\": container with ID starting with 3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716 not found: ID does not exist" containerID="3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.054364 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716"} err="failed to get container status \"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716\": rpc error: code = NotFound desc = could not find container \"3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716\": container with ID starting with 3b4c3c30ebc9b0c3921b90940e7b694dc3d53594c656eac546b439b9e81ed716 not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.054394 4915 scope.go:117] "RemoveContainer" containerID="de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.054668 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095\": container with ID starting with de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095 not found: ID does not exist" containerID="de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.054705 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095"} err="failed to get container status \"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095\": rpc error: code = NotFound desc = could not find container \"de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095\": container with ID starting with de9ccd6f5936a244f110f5585042eac4da7a22a01f3cfbd4eb554718d0a75095 not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.054734 4915 scope.go:117] "RemoveContainer" containerID="ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.055118 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa\": container with ID starting with ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa not found: ID does not exist" containerID="ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.055150 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa"} err="failed to get container status \"ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa\": rpc error: code = NotFound desc = could not find container \"ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa\": container with ID starting with ba06e1ee3f1bac4ca54f6a6c4980077d41e244cf7ef92dc26d87ec1c7d929caa not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.055169 4915 scope.go:117] "RemoveContainer" containerID="c4a31417c295ab9013ffc624ecd66d0e0f8c842ba3c8a15908e270e5a1e5bac5"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.070849 4915 scope.go:117] "RemoveContainer" containerID="b036f4f05f5af94c8661b1909b65b918e6ea0fca8b29fd05ee92a372fa134c48"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.089958 4915 scope.go:117] "RemoveContainer" containerID="721b29e4e43c4944bc270eef0fa8670b03d5e4f6db081c8bbb14bc0cbe5b99df"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.105778 4915 scope.go:117] "RemoveContainer" containerID="9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.120603 4915 scope.go:117] "RemoveContainer" containerID="9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f"
Jan 27 18:46:45 crc kubenswrapper[4915]: E0127 18:46:45.121024 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f\": container with ID starting with 9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f not found: ID does not exist" containerID="9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.121058 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f"} err="failed to get container status \"9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f\": rpc error: code = NotFound desc = could not find container \"9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f\": container with ID starting with 9e69f89b2dcddb321b7ccadb368d617fb38ecd30fcaa0ab1f50f91b650507e9f not found: ID does not exist"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.380781 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" path="/var/lib/kubelet/pods/644a6ae4-550e-4ca7-9394-d2e09abb6b16/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.381816 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" path="/var/lib/kubelet/pods/75b62d1e-bcc6-46c1-ac10-7e2819203102/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.382840 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" path="/var/lib/kubelet/pods/8ec10b90-7f91-4bda-bcaa-cce472027970/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.384355 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" path="/var/lib/kubelet/pods/a0b86903-8500-47c1-9fdb-5b0dea888375/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.384840 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.385547 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" path="/var/lib/kubelet/pods/f973ebe4-3ce2-49c7-96b8-c7e238c606a7/volumes"
Jan 27 18:46:45 crc kubenswrapper[4915]: I0127 18:46:45.860550 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g6bqr"
Jan 27 18:46:49 crc kubenswrapper[4915]: I0127 18:46:49.131312 4915 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 27 18:47:02 crc kubenswrapper[4915]: I0127 18:47:02.916113 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"]
Jan 27 18:47:02 crc kubenswrapper[4915]: I0127 18:47:02.917007 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" podUID="8b2b7254-d441-4474-b545-16b89b18f845" containerName="controller-manager" containerID="cri-o://81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b" gracePeriod=30
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.007906 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"]
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.008104 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" containerName="route-controller-manager" containerID="cri-o://1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97" gracePeriod=30
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.319407 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.384864 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"]
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385063 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385074 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385086 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385093 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385101 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385107 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385115 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2b7254-d441-4474-b545-16b89b18f845" containerName="controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385121 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2b7254-d441-4474-b545-16b89b18f845" containerName="controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385128 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385135 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385142 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385149 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385156 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385162 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385171 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385176 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385184 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385190 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385197 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385203 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385211 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385217 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385229 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385235 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="extract-content"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385243 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385249 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="extract-utilities"
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.385256 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.385262 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.391652 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392235 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="644a6ae4-550e-4ca7-9394-d2e09abb6b16" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392251 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec10b90-7f91-4bda-bcaa-cce472027970" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392264 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2b7254-d441-4474-b545-16b89b18f845" containerName="controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392273 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b86903-8500-47c1-9fdb-5b0dea888375" containerName="marketplace-operator"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392281 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b62d1e-bcc6-46c1-ac10-7e2819203102" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392292 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f973ebe4-3ce2-49c7-96b8-c7e238c606a7" containerName="registry-server"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.392950 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"]
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.393048 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.423249 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config\") pod \"8b2b7254-d441-4474-b545-16b89b18f845\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.423329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert\") pod \"8b2b7254-d441-4474-b545-16b89b18f845\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.423362 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles\") pod \"8b2b7254-d441-4474-b545-16b89b18f845\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.423383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88\") pod \"8b2b7254-d441-4474-b545-16b89b18f845\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.423422 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca\") pod \"8b2b7254-d441-4474-b545-16b89b18f845\" (UID: \"8b2b7254-d441-4474-b545-16b89b18f845\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.424108 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config" (OuterVolumeSpecName: "config") pod "8b2b7254-d441-4474-b545-16b89b18f845" (UID: "8b2b7254-d441-4474-b545-16b89b18f845"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.424149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b2b7254-d441-4474-b545-16b89b18f845" (UID: "8b2b7254-d441-4474-b545-16b89b18f845"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.424617 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b2b7254-d441-4474-b545-16b89b18f845" (UID: "8b2b7254-d441-4474-b545-16b89b18f845"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.431171 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88" (OuterVolumeSpecName: "kube-api-access-2vq88") pod "8b2b7254-d441-4474-b545-16b89b18f845" (UID: "8b2b7254-d441-4474-b545-16b89b18f845"). InnerVolumeSpecName "kube-api-access-2vq88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.434950 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b2b7254-d441-4474-b545-16b89b18f845" (UID: "8b2b7254-d441-4474-b545-16b89b18f845"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.439661 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"]
Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.439861 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" containerName="route-controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.439877 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" containerName="route-controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.441563 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" containerName="route-controller-manager"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.442096 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.454924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"]
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524233 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config\") pod \"a76afbe7-44a3-4e31-ba26-53695d082598\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524305 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca\") pod \"a76afbe7-44a3-4e31-ba26-53695d082598\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524376 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert\") pod \"a76afbe7-44a3-4e31-ba26-53695d082598\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524395 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vr8f\" (UniqueName: \"kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f\") pod \"a76afbe7-44a3-4e31-ba26-53695d082598\" (UID: \"a76afbe7-44a3-4e31-ba26-53695d082598\") "
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524497 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524524 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcw4\" (UniqueName: \"kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524548 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524568 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnf5\" (UniqueName: \"kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524594 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524632 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524649 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524724 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2b7254-d441-4474-b545-16b89b18f845-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524735 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524743 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/8b2b7254-d441-4474-b545-16b89b18f845-kube-api-access-2vq88\") on node \"crc\" DevicePath \"\""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524751 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.524759 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2b7254-d441-4474-b545-16b89b18f845-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.525477 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca" (OuterVolumeSpecName: "client-ca") pod "a76afbe7-44a3-4e31-ba26-53695d082598" (UID: "a76afbe7-44a3-4e31-ba26-53695d082598"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.525527 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config" (OuterVolumeSpecName: "config") pod "a76afbe7-44a3-4e31-ba26-53695d082598" (UID: "a76afbe7-44a3-4e31-ba26-53695d082598"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.528660 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a76afbe7-44a3-4e31-ba26-53695d082598" (UID: "a76afbe7-44a3-4e31-ba26-53695d082598"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.529589 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f" (OuterVolumeSpecName: "kube-api-access-6vr8f") pod "a76afbe7-44a3-4e31-ba26-53695d082598" (UID: "a76afbe7-44a3-4e31-ba26-53695d082598"). InnerVolumeSpecName "kube-api-access-6vr8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.625837 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.625964 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626001 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626044 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph"
Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca\") pod
\"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626141 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626200 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fcw4\" (UniqueName: \"kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626245 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnf5\" (UniqueName: \"kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626352 4915 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76afbe7-44a3-4e31-ba26-53695d082598-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626375 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vr8f\" (UniqueName: \"kubernetes.io/projected/a76afbe7-44a3-4e31-ba26-53695d082598-kube-api-access-6vr8f\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626396 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.626414 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a76afbe7-44a3-4e31-ba26-53695d082598-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.629525 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.629749 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.630434 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.630533 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.631934 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.632671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.632980 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.646131 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2mnf5\" (UniqueName: \"kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5\") pod \"controller-manager-75b4f6d956-959ph\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.648236 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fcw4\" (UniqueName: \"kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4\") pod \"route-controller-manager-9d8c44798-4bw42\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.716349 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.755200 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.963586 4915 generic.go:334] "Generic (PLEG): container finished" podID="8b2b7254-d441-4474-b545-16b89b18f845" containerID="81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b" exitCode=0 Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.963644 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" event={"ID":"8b2b7254-d441-4474-b545-16b89b18f845","Type":"ContainerDied","Data":"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b"} Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.963768 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.963938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vzw97" event={"ID":"8b2b7254-d441-4474-b545-16b89b18f845","Type":"ContainerDied","Data":"42700dd1a9532ccadc923575d286cbcb39f86f98a6861ccdf700137353cc3eb2"} Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.964003 4915 scope.go:117] "RemoveContainer" containerID="81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.965586 4915 generic.go:334] "Generic (PLEG): container finished" podID="a76afbe7-44a3-4e31-ba26-53695d082598" containerID="1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97" exitCode=0 Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.965643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" event={"ID":"a76afbe7-44a3-4e31-ba26-53695d082598","Type":"ContainerDied","Data":"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97"} Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.965684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" event={"ID":"a76afbe7-44a3-4e31-ba26-53695d082598","Type":"ContainerDied","Data":"2c56468ad7acc27b78113e23092eba409c598205cfffd348d6acd0226efb4eb6"} Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.965852 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.969972 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"] Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.995380 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"] Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.997324 4915 scope.go:117] "RemoveContainer" containerID="81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b" Jan 27 18:47:03 crc kubenswrapper[4915]: E0127 18:47:03.998094 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b\": container with ID starting with 81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b not found: ID does not exist" containerID="81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.998133 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b"} err="failed to get container status \"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b\": rpc error: code = NotFound desc = could not find container \"81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b\": container with ID starting with 81ca9ebb5eb695a53c5911b28c06cdd6f153e1bd19cc6cabaae74f447f69d20b not found: ID does not exist" Jan 27 18:47:03 crc kubenswrapper[4915]: I0127 18:47:03.998177 4915 scope.go:117] "RemoveContainer" containerID="1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97" Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.004663 4915 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vzw97"] Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.012379 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"] Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.026116 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"] Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.035026 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-882qk"] Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.042410 4915 scope.go:117] "RemoveContainer" containerID="1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97" Jan 27 18:47:04 crc kubenswrapper[4915]: E0127 18:47:04.044224 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97\": container with ID starting with 1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97 not found: ID does not exist" containerID="1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97" Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.044256 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97"} err="failed to get container status \"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97\": rpc error: code = NotFound desc = could not find container \"1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97\": container with ID starting with 1039985fafb23488126c8d863f1c086b14942401aa0e917d66604b1093b82e97 not found: ID does not exist" Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.973216 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" event={"ID":"77e051f3-73a8-4352-bd63-04f7397480eb","Type":"ContainerStarted","Data":"1744eed8dd4d7fd4fd46f670bbd72f736d243d0829840ae2dca700848a2a47bd"} Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.974223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" event={"ID":"77e051f3-73a8-4352-bd63-04f7397480eb","Type":"ContainerStarted","Data":"41319284dc55814b0fd388fcccfff39073ed5610e8b1bf029e3b7e609ef4b5cf"} Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.974262 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.978117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" event={"ID":"2c729d49-0fc6-454b-9399-409170a5d7ea","Type":"ContainerStarted","Data":"4507734f235c0d432bef11341ccc2ce0dc740b2032f693cc3c5f814029990fca"} Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.978166 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" event={"ID":"2c729d49-0fc6-454b-9399-409170a5d7ea","Type":"ContainerStarted","Data":"aab6971f1aa58bde962270940f9f7fe7aa566c62490e53947bfd3e7a863cf45c"} Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.978406 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:04 crc kubenswrapper[4915]: I0127 18:47:04.984549 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:47:04 crc kubenswrapper[4915]: 
I0127 18:47:04.984857 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:47:05 crc kubenswrapper[4915]: I0127 18:47:05.003183 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" podStartSLOduration=2.003148246 podStartE2EDuration="2.003148246s" podCreationTimestamp="2026-01-27 18:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:04.991128259 +0000 UTC m=+316.348981963" watchObservedRunningTime="2026-01-27 18:47:05.003148246 +0000 UTC m=+316.361001950" Jan 27 18:47:05 crc kubenswrapper[4915]: I0127 18:47:05.011428 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" podStartSLOduration=2.011415544 podStartE2EDuration="2.011415544s" podCreationTimestamp="2026-01-27 18:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:47:05.00859801 +0000 UTC m=+316.366451664" watchObservedRunningTime="2026-01-27 18:47:05.011415544 +0000 UTC m=+316.369269208" Jan 27 18:47:05 crc kubenswrapper[4915]: I0127 18:47:05.364664 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2b7254-d441-4474-b545-16b89b18f845" path="/var/lib/kubelet/pods/8b2b7254-d441-4474-b545-16b89b18f845/volumes" Jan 27 18:47:05 crc kubenswrapper[4915]: I0127 18:47:05.366130 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76afbe7-44a3-4e31-ba26-53695d082598" path="/var/lib/kubelet/pods/a76afbe7-44a3-4e31-ba26-53695d082598/volumes" Jan 27 18:47:20 crc kubenswrapper[4915]: I0127 18:47:20.624519 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:47:20 crc kubenswrapper[4915]: I0127 18:47:20.625116 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:47:50 crc kubenswrapper[4915]: I0127 18:47:50.624435 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:47:50 crc kubenswrapper[4915]: I0127 18:47:50.625171 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:48:02 crc kubenswrapper[4915]: I0127 18:48:02.906489 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"] Jan 27 18:48:02 crc kubenswrapper[4915]: I0127 18:48:02.907457 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" podUID="2c729d49-0fc6-454b-9399-409170a5d7ea" containerName="controller-manager" containerID="cri-o://4507734f235c0d432bef11341ccc2ce0dc740b2032f693cc3c5f814029990fca" gracePeriod=30 Jan 27 18:48:02 crc kubenswrapper[4915]: I0127 
18:48:02.924697 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"] Jan 27 18:48:02 crc kubenswrapper[4915]: I0127 18:48:02.925013 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" podUID="77e051f3-73a8-4352-bd63-04f7397480eb" containerName="route-controller-manager" containerID="cri-o://1744eed8dd4d7fd4fd46f670bbd72f736d243d0829840ae2dca700848a2a47bd" gracePeriod=30 Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.316573 4915 generic.go:334] "Generic (PLEG): container finished" podID="2c729d49-0fc6-454b-9399-409170a5d7ea" containerID="4507734f235c0d432bef11341ccc2ce0dc740b2032f693cc3c5f814029990fca" exitCode=0 Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.316592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" event={"ID":"2c729d49-0fc6-454b-9399-409170a5d7ea","Type":"ContainerDied","Data":"4507734f235c0d432bef11341ccc2ce0dc740b2032f693cc3c5f814029990fca"} Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.318578 4915 generic.go:334] "Generic (PLEG): container finished" podID="77e051f3-73a8-4352-bd63-04f7397480eb" containerID="1744eed8dd4d7fd4fd46f670bbd72f736d243d0829840ae2dca700848a2a47bd" exitCode=0 Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.318610 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" event={"ID":"77e051f3-73a8-4352-bd63-04f7397480eb","Type":"ContainerDied","Data":"1744eed8dd4d7fd4fd46f670bbd72f736d243d0829840ae2dca700848a2a47bd"} Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.368147 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.387329 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.450827 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert\") pod \"2c729d49-0fc6-454b-9399-409170a5d7ea\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.450902 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca\") pod \"2c729d49-0fc6-454b-9399-409170a5d7ea\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.450965 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mnf5\" (UniqueName: \"kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5\") pod \"2c729d49-0fc6-454b-9399-409170a5d7ea\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.450997 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config\") pod \"2c729d49-0fc6-454b-9399-409170a5d7ea\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.451021 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert\") pod 
\"77e051f3-73a8-4352-bd63-04f7397480eb\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.451042 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config\") pod \"77e051f3-73a8-4352-bd63-04f7397480eb\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.451065 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca\") pod \"77e051f3-73a8-4352-bd63-04f7397480eb\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.451099 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fcw4\" (UniqueName: \"kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4\") pod \"77e051f3-73a8-4352-bd63-04f7397480eb\" (UID: \"77e051f3-73a8-4352-bd63-04f7397480eb\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.451137 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles\") pod \"2c729d49-0fc6-454b-9399-409170a5d7ea\" (UID: \"2c729d49-0fc6-454b-9399-409170a5d7ea\") " Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.452550 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2c729d49-0fc6-454b-9399-409170a5d7ea" (UID: "2c729d49-0fc6-454b-9399-409170a5d7ea"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.453421 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config" (OuterVolumeSpecName: "config") pod "77e051f3-73a8-4352-bd63-04f7397480eb" (UID: "77e051f3-73a8-4352-bd63-04f7397480eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.454141 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "77e051f3-73a8-4352-bd63-04f7397480eb" (UID: "77e051f3-73a8-4352-bd63-04f7397480eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.454199 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "2c729d49-0fc6-454b-9399-409170a5d7ea" (UID: "2c729d49-0fc6-454b-9399-409170a5d7ea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.457051 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config" (OuterVolumeSpecName: "config") pod "2c729d49-0fc6-454b-9399-409170a5d7ea" (UID: "2c729d49-0fc6-454b-9399-409170a5d7ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.460867 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77e051f3-73a8-4352-bd63-04f7397480eb" (UID: "77e051f3-73a8-4352-bd63-04f7397480eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.460906 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4" (OuterVolumeSpecName: "kube-api-access-2fcw4") pod "77e051f3-73a8-4352-bd63-04f7397480eb" (UID: "77e051f3-73a8-4352-bd63-04f7397480eb"). InnerVolumeSpecName "kube-api-access-2fcw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.460927 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2c729d49-0fc6-454b-9399-409170a5d7ea" (UID: "2c729d49-0fc6-454b-9399-409170a5d7ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.460951 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5" (OuterVolumeSpecName: "kube-api-access-2mnf5") pod "2c729d49-0fc6-454b-9399-409170a5d7ea" (UID: "2c729d49-0fc6-454b-9399-409170a5d7ea"). InnerVolumeSpecName "kube-api-access-2mnf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552652 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mnf5\" (UniqueName: \"kubernetes.io/projected/2c729d49-0fc6-454b-9399-409170a5d7ea-kube-api-access-2mnf5\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552689 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552705 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e051f3-73a8-4352-bd63-04f7397480eb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552718 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552728 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77e051f3-73a8-4352-bd63-04f7397480eb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552739 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fcw4\" (UniqueName: \"kubernetes.io/projected/77e051f3-73a8-4352-bd63-04f7397480eb-kube-api-access-2fcw4\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552751 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552760 4915 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c729d49-0fc6-454b-9399-409170a5d7ea-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:03 crc kubenswrapper[4915]: I0127 18:48:03.552769 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c729d49-0fc6-454b-9399-409170a5d7ea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.327208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" event={"ID":"2c729d49-0fc6-454b-9399-409170a5d7ea","Type":"ContainerDied","Data":"aab6971f1aa58bde962270940f9f7fe7aa566c62490e53947bfd3e7a863cf45c"} Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.327284 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-959ph" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.327578 4915 scope.go:117] "RemoveContainer" containerID="4507734f235c0d432bef11341ccc2ce0dc740b2032f693cc3c5f814029990fca" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.331199 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" event={"ID":"77e051f3-73a8-4352-bd63-04f7397480eb","Type":"ContainerDied","Data":"41319284dc55814b0fd388fcccfff39073ed5610e8b1bf029e3b7e609ef4b5cf"} Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.331269 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.353971 4915 scope.go:117] "RemoveContainer" containerID="1744eed8dd4d7fd4fd46f670bbd72f736d243d0829840ae2dca700848a2a47bd" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.374014 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.386643 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-959ph"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.392696 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.396098 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-4bw42"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.842868 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df6477d49-2l998"] Jan 27 18:48:04 crc kubenswrapper[4915]: E0127 18:48:04.847182 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e051f3-73a8-4352-bd63-04f7397480eb" containerName="route-controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.847244 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e051f3-73a8-4352-bd63-04f7397480eb" containerName="route-controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: E0127 18:48:04.847281 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c729d49-0fc6-454b-9399-409170a5d7ea" containerName="controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.847293 4915 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2c729d49-0fc6-454b-9399-409170a5d7ea" containerName="controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.847502 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c729d49-0fc6-454b-9399-409170a5d7ea" containerName="controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.847527 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e051f3-73a8-4352-bd63-04f7397480eb" containerName="route-controller-manager" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.848447 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.850284 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.852025 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.854025 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df6477d49-2l998"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.857319 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.858373 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv"] Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.859196 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.860869 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861427 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861579 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861624 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861853 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861948 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 18:48:04 crc 
kubenswrapper[4915]: I0127 18:48:04.861991 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.861954 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.862093 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.862096 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.882271 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975680 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a53d35cb-c374-4ff2-bd10-ff17d0bba607-serving-cert\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975765 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcvdc\" (UniqueName: \"kubernetes.io/projected/a53d35cb-c374-4ff2-bd10-ff17d0bba607-kube-api-access-bcvdc\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975806 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-config\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975856 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-serving-cert\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975874 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-client-ca\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.975908 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-proxy-ca-bundles\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.976022 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hqd\" (UniqueName: \"kubernetes.io/projected/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-kube-api-access-h9hqd\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " 
pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.976057 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-config\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:04 crc kubenswrapper[4915]: I0127 18:48:04.976079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-client-ca\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.077925 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hqd\" (UniqueName: \"kubernetes.io/projected/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-kube-api-access-h9hqd\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078005 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-config\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-client-ca\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078148 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a53d35cb-c374-4ff2-bd10-ff17d0bba607-serving-cert\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078204 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcvdc\" (UniqueName: \"kubernetes.io/projected/a53d35cb-c374-4ff2-bd10-ff17d0bba607-kube-api-access-bcvdc\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-config\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078265 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-serving-cert\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc 
kubenswrapper[4915]: I0127 18:48:05.078298 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-client-ca\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.078320 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-proxy-ca-bundles\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.079600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-client-ca\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.079835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-config\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.079907 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-proxy-ca-bundles\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " 
pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.079918 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-client-ca\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.083005 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-serving-cert\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.084494 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a53d35cb-c374-4ff2-bd10-ff17d0bba607-serving-cert\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.098057 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hqd\" (UniqueName: \"kubernetes.io/projected/2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a-kube-api-access-h9hqd\") pod \"route-controller-manager-df6477d49-2l998\" (UID: \"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a\") " pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.099360 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d35cb-c374-4ff2-bd10-ff17d0bba607-config\") pod 
\"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.099498 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcvdc\" (UniqueName: \"kubernetes.io/projected/a53d35cb-c374-4ff2-bd10-ff17d0bba607-kube-api-access-bcvdc\") pod \"controller-manager-5cf7bf7458-4xhdv\" (UID: \"a53d35cb-c374-4ff2-bd10-ff17d0bba607\") " pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.191057 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.207259 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.367700 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c729d49-0fc6-454b-9399-409170a5d7ea" path="/var/lib/kubelet/pods/2c729d49-0fc6-454b-9399-409170a5d7ea/volumes" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.369140 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e051f3-73a8-4352-bd63-04f7397480eb" path="/var/lib/kubelet/pods/77e051f3-73a8-4352-bd63-04f7397480eb/volumes" Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.456208 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df6477d49-2l998"] Jan 27 18:48:05 crc kubenswrapper[4915]: I0127 18:48:05.513718 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv"] Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.350016 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" event={"ID":"a53d35cb-c374-4ff2-bd10-ff17d0bba607","Type":"ContainerStarted","Data":"e6f1e3eb4e73703ba52f5bcca4bf6225e5d16ff3f71b8a79010dbd8920087919"} Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.350390 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.350408 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" event={"ID":"a53d35cb-c374-4ff2-bd10-ff17d0bba607","Type":"ContainerStarted","Data":"ad68f69e03a2cab6e80c0d038b36ef5b80eeaf64712a4f6d2dfc5f3d0627d86e"} Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.352454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" event={"ID":"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a","Type":"ContainerStarted","Data":"ef41ba8b7f370b63f8a2d6115ae994ea59c4503e04a0e638b3af5fa3f0e8e69f"} Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.352515 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" event={"ID":"2d19dcb4-c4b8-4f4d-89b5-c4fbb333833a","Type":"ContainerStarted","Data":"e866650365df7ea4b27fa200a4de515c7d0e96487a08f06efb3f664271a260d6"} Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.352667 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.358534 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.361143 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.372119 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cf7bf7458-4xhdv" podStartSLOduration=4.372098116 podStartE2EDuration="4.372098116s" podCreationTimestamp="2026-01-27 18:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:06.371521901 +0000 UTC m=+377.729375605" watchObservedRunningTime="2026-01-27 18:48:06.372098116 +0000 UTC m=+377.729951800" Jan 27 18:48:06 crc kubenswrapper[4915]: I0127 18:48:06.394277 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df6477d49-2l998" podStartSLOduration=4.394251097 podStartE2EDuration="4.394251097s" podCreationTimestamp="2026-01-27 18:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:06.391051376 +0000 UTC m=+377.748905080" watchObservedRunningTime="2026-01-27 18:48:06.394251097 +0000 UTC m=+377.752104791" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.761585 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9s28z"] Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.762693 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.779084 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9s28z"] Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.836862 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-tls\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.836922 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-certificates\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.836946 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e67da930-dcad-42ee-8ad4-aebdba34f9cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.836992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-bound-sa-token\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.837045 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-trusted-ca\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.837080 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9x5\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-kube-api-access-fr9x5\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.837118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.837176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e67da930-dcad-42ee-8ad4-aebdba34f9cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.866910 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-trusted-ca\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938602 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9x5\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-kube-api-access-fr9x5\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938639 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e67da930-dcad-42ee-8ad4-aebdba34f9cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938671 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-tls\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938698 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e67da930-dcad-42ee-8ad4-aebdba34f9cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938715 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-certificates\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.938736 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-bound-sa-token\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.940121 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-trusted-ca\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.941205 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-certificates\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.941961 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e67da930-dcad-42ee-8ad4-aebdba34f9cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.946098 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e67da930-dcad-42ee-8ad4-aebdba34f9cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.958707 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-bound-sa-token\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.964425 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-registry-tls\") pod \"image-registry-66df7c8f76-9s28z\" (UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:09 crc kubenswrapper[4915]: I0127 18:48:09.973122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9x5\" (UniqueName: \"kubernetes.io/projected/e67da930-dcad-42ee-8ad4-aebdba34f9cd-kube-api-access-fr9x5\") pod \"image-registry-66df7c8f76-9s28z\" 
(UID: \"e67da930-dcad-42ee-8ad4-aebdba34f9cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.077094 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.346948 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhlml"] Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.350822 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.353907 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.356194 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhlml"] Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.445926 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-catalog-content\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.445998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-utilities\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.446037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-chkwl\" (UniqueName: \"kubernetes.io/projected/7a9d312f-394f-4678-99fe-5a16946a5c6b-kube-api-access-chkwl\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.455450 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9s28z"] Jan 27 18:48:10 crc kubenswrapper[4915]: W0127 18:48:10.463286 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode67da930_dcad_42ee_8ad4_aebdba34f9cd.slice/crio-9b2381ec0993cd2b6fd3915e02a1aea3f122027c94f755bc5cb8a30c73225a53 WatchSource:0}: Error finding container 9b2381ec0993cd2b6fd3915e02a1aea3f122027c94f755bc5cb8a30c73225a53: Status 404 returned error can't find the container with id 9b2381ec0993cd2b6fd3915e02a1aea3f122027c94f755bc5cb8a30c73225a53 Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.535028 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqwv4"] Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.536243 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.538260 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.547872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-catalog-content\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.547900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-utilities\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.547918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chkwl\" (UniqueName: \"kubernetes.io/projected/7a9d312f-394f-4678-99fe-5a16946a5c6b-kube-api-access-chkwl\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.550322 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqwv4"] Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.550960 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-catalog-content\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" 
Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.551258 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9d312f-394f-4678-99fe-5a16946a5c6b-utilities\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.570939 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chkwl\" (UniqueName: \"kubernetes.io/projected/7a9d312f-394f-4678-99fe-5a16946a5c6b-kube-api-access-chkwl\") pod \"redhat-operators-lhlml\" (UID: \"7a9d312f-394f-4678-99fe-5a16946a5c6b\") " pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.648609 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-catalog-content\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.648661 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-utilities\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.648727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74t9n\" (UniqueName: \"kubernetes.io/projected/b5b20ad5-a32b-4346-999b-df644e36e6d5-kube-api-access-74t9n\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " 
pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.668459 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhlml" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.750014 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74t9n\" (UniqueName: \"kubernetes.io/projected/b5b20ad5-a32b-4346-999b-df644e36e6d5-kube-api-access-74t9n\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.750542 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-catalog-content\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.750575 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-utilities\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.751607 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-utilities\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.751915 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b5b20ad5-a32b-4346-999b-df644e36e6d5-catalog-content\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.772447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74t9n\" (UniqueName: \"kubernetes.io/projected/b5b20ad5-a32b-4346-999b-df644e36e6d5-kube-api-access-74t9n\") pod \"redhat-marketplace-hqwv4\" (UID: \"b5b20ad5-a32b-4346-999b-df644e36e6d5\") " pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:10 crc kubenswrapper[4915]: I0127 18:48:10.865413 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqwv4" Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.041009 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhlml"] Jan 27 18:48:11 crc kubenswrapper[4915]: W0127 18:48:11.051049 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a9d312f_394f_4678_99fe_5a16946a5c6b.slice/crio-7b694fa73e2ce4e6085c695b913caeb4c65332016f78c9c223ffadae820f50c9 WatchSource:0}: Error finding container 7b694fa73e2ce4e6085c695b913caeb4c65332016f78c9c223ffadae820f50c9: Status 404 returned error can't find the container with id 7b694fa73e2ce4e6085c695b913caeb4c65332016f78c9c223ffadae820f50c9 Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.260408 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqwv4"] Jan 27 18:48:11 crc kubenswrapper[4915]: W0127 18:48:11.274128 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b20ad5_a32b_4346_999b_df644e36e6d5.slice/crio-06e8866ee31f2b41c2c85a08122e18097fe5a52072e493e623840e1cb33fcc90 
WatchSource:0}: Error finding container 06e8866ee31f2b41c2c85a08122e18097fe5a52072e493e623840e1cb33fcc90: Status 404 returned error can't find the container with id 06e8866ee31f2b41c2c85a08122e18097fe5a52072e493e623840e1cb33fcc90 Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.382948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" event={"ID":"e67da930-dcad-42ee-8ad4-aebdba34f9cd","Type":"ContainerStarted","Data":"f82c1f1df7f395391d2c20a91284572241e6f2f7ea6d9e5dda00f6eac0cc0196"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.383001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" event={"ID":"e67da930-dcad-42ee-8ad4-aebdba34f9cd","Type":"ContainerStarted","Data":"9b2381ec0993cd2b6fd3915e02a1aea3f122027c94f755bc5cb8a30c73225a53"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.383050 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.384004 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerStarted","Data":"6aa17992f480c47df6a07fdb96e0a8b6a6e9d7ee9b14a51db0dd84c53ae531b6"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.384047 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerStarted","Data":"06e8866ee31f2b41c2c85a08122e18097fe5a52072e493e623840e1cb33fcc90"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.385615 4915 generic.go:334] "Generic (PLEG): container finished" podID="7a9d312f-394f-4678-99fe-5a16946a5c6b" containerID="7de274664fa3e96e5efc4ec3f6329c3e39b4d962ebfc89ff6c151e7004719718" exitCode=0 Jan 27 
18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.385650 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhlml" event={"ID":"7a9d312f-394f-4678-99fe-5a16946a5c6b","Type":"ContainerDied","Data":"7de274664fa3e96e5efc4ec3f6329c3e39b4d962ebfc89ff6c151e7004719718"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.385667 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhlml" event={"ID":"7a9d312f-394f-4678-99fe-5a16946a5c6b","Type":"ContainerStarted","Data":"7b694fa73e2ce4e6085c695b913caeb4c65332016f78c9c223ffadae820f50c9"} Jan 27 18:48:11 crc kubenswrapper[4915]: I0127 18:48:11.405235 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z" podStartSLOduration=2.405221944 podStartE2EDuration="2.405221944s" podCreationTimestamp="2026-01-27 18:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:48:11.400925685 +0000 UTC m=+382.758779349" watchObservedRunningTime="2026-01-27 18:48:11.405221944 +0000 UTC m=+382.763075608" Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.397051 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5b20ad5-a32b-4346-999b-df644e36e6d5" containerID="6aa17992f480c47df6a07fdb96e0a8b6a6e9d7ee9b14a51db0dd84c53ae531b6" exitCode=0 Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.397425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerDied","Data":"6aa17992f480c47df6a07fdb96e0a8b6a6e9d7ee9b14a51db0dd84c53ae531b6"} Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.399444 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhlml" 
event={"ID":"7a9d312f-394f-4678-99fe-5a16946a5c6b","Type":"ContainerStarted","Data":"68609fc6891a63642258798deba5f531ae2e401dd3f5df36a5601d183ac476ff"} Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.944978 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fgszs"] Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.951609 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.957292 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.970248 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgszs"] Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.990705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.990773 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:12 crc kubenswrapper[4915]: I0127 18:48:12.990826 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdng\" (UniqueName: \"kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng\") pod 
\"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.092373 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.092445 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.092470 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdng\" (UniqueName: \"kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.093412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.093656 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities\") pod \"community-operators-fgszs\" (UID: 
\"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.116763 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdng\" (UniqueName: \"kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng\") pod \"community-operators-fgszs\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.145988 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"] Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.147265 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.149909 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.154865 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"] Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.193829 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.194318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " 
pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.194365 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2n25\" (UniqueName: \"kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.285625 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgszs" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.295679 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.295761 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.295815 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2n25\" (UniqueName: \"kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.296464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.296514 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.312244 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2n25\" (UniqueName: \"kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25\") pod \"certified-operators-p6dnm\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.410734 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerStarted","Data":"e0bf4afd2ff54cffc257b75d5870127043defd8e20c2e1c0b14fcfdae9108479"} Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.414082 4915 generic.go:334] "Generic (PLEG): container finished" podID="7a9d312f-394f-4678-99fe-5a16946a5c6b" containerID="68609fc6891a63642258798deba5f531ae2e401dd3f5df36a5601d183ac476ff" exitCode=0 Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.414162 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhlml" event={"ID":"7a9d312f-394f-4678-99fe-5a16946a5c6b","Type":"ContainerDied","Data":"68609fc6891a63642258798deba5f531ae2e401dd3f5df36a5601d183ac476ff"} Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.506185 
4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6dnm"
Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.727009 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fgszs"]
Jan 27 18:48:13 crc kubenswrapper[4915]: I0127 18:48:13.890131 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"]
Jan 27 18:48:13 crc kubenswrapper[4915]: W0127 18:48:13.948806 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb018046c_4f05_4969_ae24_3017ac28f3f7.slice/crio-9706ac2cd0961ae5ee755025d0174311c44410845ea8400264652d3c335b8b07 WatchSource:0}: Error finding container 9706ac2cd0961ae5ee755025d0174311c44410845ea8400264652d3c335b8b07: Status 404 returned error can't find the container with id 9706ac2cd0961ae5ee755025d0174311c44410845ea8400264652d3c335b8b07
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.421855 4915 generic.go:334] "Generic (PLEG): container finished" podID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerID="b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa" exitCode=0
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.421934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerDied","Data":"b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.421982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerStarted","Data":"9706ac2cd0961ae5ee755025d0174311c44410845ea8400264652d3c335b8b07"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.424491 4915 generic.go:334] "Generic (PLEG): container finished" podID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerID="c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454" exitCode=0
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.424580 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerDied","Data":"c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.424599 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerStarted","Data":"7c57d5c01fe09bfbf94c1a53a92c0c230acd8febfdb212cab16472d6614b100b"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.428434 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5b20ad5-a32b-4346-999b-df644e36e6d5" containerID="e0bf4afd2ff54cffc257b75d5870127043defd8e20c2e1c0b14fcfdae9108479" exitCode=0
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.428540 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerDied","Data":"e0bf4afd2ff54cffc257b75d5870127043defd8e20c2e1c0b14fcfdae9108479"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.432357 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhlml" event={"ID":"7a9d312f-394f-4678-99fe-5a16946a5c6b","Type":"ContainerStarted","Data":"5cc1e0b1d71bdee481ea54c292e00efd79ee1be16401ef08b39d5f38c4e41a82"}
Jan 27 18:48:14 crc kubenswrapper[4915]: I0127 18:48:14.501162 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhlml" podStartSLOduration=2.097090299 podStartE2EDuration="4.501138655s" podCreationTimestamp="2026-01-27 18:48:10 +0000 UTC" firstStartedPulling="2026-01-27 18:48:11.38653917 +0000 UTC m=+382.744392824" lastFinishedPulling="2026-01-27 18:48:13.790587496 +0000 UTC m=+385.148441180" observedRunningTime="2026-01-27 18:48:14.49463885 +0000 UTC m=+385.852492534" watchObservedRunningTime="2026-01-27 18:48:14.501138655 +0000 UTC m=+385.858992339"
Jan 27 18:48:15 crc kubenswrapper[4915]: I0127 18:48:15.438665 4915 generic.go:334] "Generic (PLEG): container finished" podID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerID="7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140" exitCode=0
Jan 27 18:48:15 crc kubenswrapper[4915]: I0127 18:48:15.438823 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerDied","Data":"7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140"}
Jan 27 18:48:15 crc kubenswrapper[4915]: I0127 18:48:15.445991 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqwv4" event={"ID":"b5b20ad5-a32b-4346-999b-df644e36e6d5","Type":"ContainerStarted","Data":"a79651cbc706fad239ef54c0fe71fde6276d71a5ab4d4e5bb7415e93cda05e2d"}
Jan 27 18:48:15 crc kubenswrapper[4915]: I0127 18:48:15.486238 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqwv4" podStartSLOduration=3.055985786 podStartE2EDuration="5.486211156s" podCreationTimestamp="2026-01-27 18:48:10 +0000 UTC" firstStartedPulling="2026-01-27 18:48:12.398948414 +0000 UTC m=+383.756802088" lastFinishedPulling="2026-01-27 18:48:14.829173794 +0000 UTC m=+386.187027458" observedRunningTime="2026-01-27 18:48:15.485274852 +0000 UTC m=+386.843128516" watchObservedRunningTime="2026-01-27 18:48:15.486211156 +0000 UTC m=+386.844064840"
Jan 27 18:48:16 crc kubenswrapper[4915]: I0127 18:48:16.452475 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerStarted","Data":"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5"}
Jan 27 18:48:16 crc kubenswrapper[4915]: I0127 18:48:16.454457 4915 generic.go:334] "Generic (PLEG): container finished" podID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerID="d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2" exitCode=0
Jan 27 18:48:16 crc kubenswrapper[4915]: I0127 18:48:16.454623 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerDied","Data":"d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2"}
Jan 27 18:48:16 crc kubenswrapper[4915]: I0127 18:48:16.475697 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p6dnm" podStartSLOduration=1.953406335 podStartE2EDuration="3.475672559s" podCreationTimestamp="2026-01-27 18:48:13 +0000 UTC" firstStartedPulling="2026-01-27 18:48:14.423361913 +0000 UTC m=+385.781215577" lastFinishedPulling="2026-01-27 18:48:15.945628126 +0000 UTC m=+387.303481801" observedRunningTime="2026-01-27 18:48:16.472356064 +0000 UTC m=+387.830209728" watchObservedRunningTime="2026-01-27 18:48:16.475672559 +0000 UTC m=+387.833526223"
Jan 27 18:48:17 crc kubenswrapper[4915]: I0127 18:48:17.466024 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerStarted","Data":"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e"}
Jan 27 18:48:17 crc kubenswrapper[4915]: I0127 18:48:17.502983 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fgszs" podStartSLOduration=2.969297638 podStartE2EDuration="5.502957771s" podCreationTimestamp="2026-01-27 18:48:12 +0000 UTC" firstStartedPulling="2026-01-27 18:48:14.426363309 +0000 UTC m=+385.784216973" lastFinishedPulling="2026-01-27 18:48:16.960023422 +0000 UTC m=+388.317877106" observedRunningTime="2026-01-27 18:48:17.501028432 +0000 UTC m=+388.858882106" watchObservedRunningTime="2026-01-27 18:48:17.502957771 +0000 UTC m=+388.860811435"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.624651 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.625242 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.625308 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.626132 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.626229 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f" gracePeriod=600
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.669175 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhlml"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.669239 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhlml"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.718401 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhlml"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.866779 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqwv4"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.867444 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqwv4"
Jan 27 18:48:20 crc kubenswrapper[4915]: I0127 18:48:20.918247 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqwv4"
Jan 27 18:48:21 crc kubenswrapper[4915]: I0127 18:48:21.563087 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhlml"
Jan 27 18:48:22 crc kubenswrapper[4915]: I0127 18:48:22.747887 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqwv4"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.287224 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fgszs"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.287291 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fgszs"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.339926 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fgszs"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.507295 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6dnm"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.507667 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6dnm"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.562549 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fgszs"
Jan 27 18:48:23 crc kubenswrapper[4915]: I0127 18:48:23.573766 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6dnm"
Jan 27 18:48:24 crc kubenswrapper[4915]: I0127 18:48:24.506335 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f" exitCode=0
Jan 27 18:48:24 crc kubenswrapper[4915]: I0127 18:48:24.506408 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f"}
Jan 27 18:48:24 crc kubenswrapper[4915]: I0127 18:48:24.506703 4915 scope.go:117] "RemoveContainer" containerID="d8ece92b9a56dbad74eb7511b2563827dac25744e2d856dd6202bebe1e457ba3"
Jan 27 18:48:24 crc kubenswrapper[4915]: I0127 18:48:24.561728 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6dnm"
Jan 27 18:48:25 crc kubenswrapper[4915]: I0127 18:48:25.517617 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b"}
Jan 27 18:48:30 crc kubenswrapper[4915]: I0127 18:48:30.085939 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9s28z"
Jan 27 18:48:30 crc kubenswrapper[4915]: I0127 18:48:30.160166 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"]
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.215703 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" podUID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" containerName="registry" containerID="cri-o://917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc" gracePeriod=30
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.715413 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.725678 4915 generic.go:334] "Generic (PLEG): container finished" podID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" containerID="917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc" exitCode=0
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.725740 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" event={"ID":"ed1f13f7-cdf2-4eb8-addd-7087dd72db11","Type":"ContainerDied","Data":"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"}
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.725778 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4" event={"ID":"ed1f13f7-cdf2-4eb8-addd-7087dd72db11","Type":"ContainerDied","Data":"6d46cc4d1bf63ca50c0c74e3bd8c9064dcb52db104aa7ba903bbcee8a2fc5f9f"}
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.725835 4915 scope.go:117] "RemoveContainer" containerID="917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.725992 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lqcx4"
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.763096 4915 scope.go:117] "RemoveContainer" containerID="917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"
Jan 27 18:48:55 crc kubenswrapper[4915]: E0127 18:48:55.764411 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc\": container with ID starting with 917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc not found: ID does not exist" containerID="917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.764545 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc"} err="failed to get container status \"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc\": rpc error: code = NotFound desc = could not find container \"917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc\": container with ID starting with 917452f3a9fcdd89fec98b5134191c36df2b3aac39c4f7f45dfd3cefafbf2adc not found: ID does not exist"
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.820565 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.820650 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.820817 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.820880 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.821908 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.822074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c49ph\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.822130 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.822178 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.822380 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token\") pod \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\" (UID: \"ed1f13f7-cdf2-4eb8-addd-7087dd72db11\") "
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.822976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.823729 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.823830 4915 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.830718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.833384 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph" (OuterVolumeSpecName: "kube-api-access-c49ph") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "kube-api-access-c49ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.835177 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.839395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.841559 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.859350 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ed1f13f7-cdf2-4eb8-addd-7087dd72db11" (UID: "ed1f13f7-cdf2-4eb8-addd-7087dd72db11"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.925869 4915 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.925930 4915 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.925952 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c49ph\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-kube-api-access-c49ph\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.925970 4915 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:55 crc kubenswrapper[4915]: I0127 18:48:55.925990 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed1f13f7-cdf2-4eb8-addd-7087dd72db11-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 18:48:56 crc kubenswrapper[4915]: I0127 18:48:56.076283 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"]
Jan 27 18:48:56 crc kubenswrapper[4915]: I0127 18:48:56.081749 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lqcx4"]
Jan 27 18:48:57 crc kubenswrapper[4915]: I0127 18:48:57.371504 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" path="/var/lib/kubelet/pods/ed1f13f7-cdf2-4eb8-addd-7087dd72db11/volumes"
Jan 27 18:50:49 crc kubenswrapper[4915]: I0127 18:50:49.611150 4915 scope.go:117] "RemoveContainer" containerID="f2c419d9283db99758b56abe9530360c5e770ec23714db637e37b914570259a3"
Jan 27 18:50:50 crc kubenswrapper[4915]: I0127 18:50:50.624399 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:50:50 crc kubenswrapper[4915]: I0127 18:50:50.624501 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:51:20 crc kubenswrapper[4915]: I0127 18:51:20.624458 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:51:20 crc kubenswrapper[4915]: I0127 18:51:20.625483 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:51:49 crc kubenswrapper[4915]: I0127 18:51:49.654316 4915 scope.go:117] "RemoveContainer" containerID="e2fca945a92f6c10554080f8235fb6ca7d47db6aa70e1c71ffdfb1abb803c1c8"
Jan 27 18:51:49 crc kubenswrapper[4915]: I0127 18:51:49.684975 4915 scope.go:117] "RemoveContainer" containerID="841ecd031ef5cf3b15113f1d0f573fcd17c668b5b411b2390dfea442d439d69d"
Jan 27 18:51:50 crc kubenswrapper[4915]: I0127 18:51:50.626144 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:51:50 crc kubenswrapper[4915]: I0127 18:51:50.626692 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:51:50 crc kubenswrapper[4915]: I0127 18:51:50.626758 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 18:51:50 crc kubenswrapper[4915]: I0127 18:51:50.627649 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:51:50 crc kubenswrapper[4915]: I0127 18:51:50.627755 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b" gracePeriod=600
Jan 27 18:51:51 crc kubenswrapper[4915]: I0127 18:51:51.270677 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b" exitCode=0
Jan 27 18:51:51 crc kubenswrapper[4915]: I0127 18:51:51.270761 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b"}
Jan 27 18:51:51 crc kubenswrapper[4915]: I0127 18:51:51.270905 4915 scope.go:117] "RemoveContainer" containerID="6631bd22e9892834edbb04562817ce83e419fee41fa7145f12ca5baef6b13a1f"
Jan 27 18:51:52 crc kubenswrapper[4915]: I0127 18:51:52.283155 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036"}
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.275377 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8spt"]
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.278656 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-controller" containerID="cri-o://390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279216 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="sbdb" containerID="cri-o://d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279290 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="nbdb" containerID="cri-o://ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279379 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="northd" containerID="cri-o://887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279468 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279543 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-node" containerID="cri-o://0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.279601 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-acl-logging" containerID="cri-o://1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.368864 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" containerID="cri-o://63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" gracePeriod=30
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.643083 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/3.log"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.645181 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovn-acl-logging/0.log"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.645592 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovn-controller/0.log"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.645951 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.690658 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hhfxm"]
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.690909 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="northd"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.690926 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="northd"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.690940 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="nbdb"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.690948 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="nbdb"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.690960 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.690968 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.690979 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-ovn-metrics"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.690987 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-ovn-metrics"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691003 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691010 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691018 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="sbdb"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691024 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="sbdb"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691035 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kubecfg-setup"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691042 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kubecfg-setup"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691050 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-node"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691057 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-node"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691067 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" containerName="registry"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691073 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" containerName="registry"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691083 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691091 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691101 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-acl-logging"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691108 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-acl-logging"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691119 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691126 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-controller"
Jan 27 18:53:56 crc kubenswrapper[4915]: E0127
18:53:56.691146 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691155 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691253 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691265 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-node" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691274 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691282 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691290 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691296 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1f13f7-cdf2-4eb8-addd-7087dd72db11" containerName="registry" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691304 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691311 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="northd" Jan 27 
18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691318 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="sbdb" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691324 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="nbdb" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691330 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovn-acl-logging" Jan 27 18:53:56 crc kubenswrapper[4915]: E0127 18:53:56.691409 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691416 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691494 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.691653 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" containerName="ovnkube-controller" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.693138 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699724 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699778 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699828 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699852 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699877 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-mwsc5\" (UniqueName: \"kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699919 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.699979 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700015 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700038 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 
18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700060 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700078 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700096 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700121 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700148 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700181 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700214 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700242 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700267 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib\") pod \"eb87671e-1bee-4bef-843d-6fce9467079d\" (UID: \"eb87671e-1bee-4bef-843d-6fce9467079d\") " Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.700870 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701183 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701200 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701185 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701207 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701199 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701225 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log" (OuterVolumeSpecName: "node-log") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701234 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701250 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701252 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701264 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701568 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701568 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket" (OuterVolumeSpecName: "log-socket") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.701618 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash" (OuterVolumeSpecName: "host-slash") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.706558 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.708036 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5" (OuterVolumeSpecName: "kube-api-access-mwsc5") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "kube-api-access-mwsc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.722916 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "eb87671e-1bee-4bef-843d-6fce9467079d" (UID: "eb87671e-1bee-4bef-843d-6fce9467079d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-var-lib-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801698 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-netns\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801741 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-kubelet\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-ovn\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801804 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-env-overrides\") pod \"ovnkube-node-hhfxm\" (UID: 
\"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801832 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-config\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801852 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-log-socket\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801878 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-systemd-units\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801939 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj47x\" (UniqueName: \"kubernetes.io/projected/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-kube-api-access-lj47x\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801971 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-node-log\") pod 
\"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.801993 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-systemd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802011 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-netd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802043 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovn-node-metrics-cert\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802063 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802082 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-etc-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-bin\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802139 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-script-lib\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-slash\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802261 4915 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802274 4915 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802286 4915 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802297 4915 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-node-log\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802308 4915 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802321 4915 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802335 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802346 4915 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-log-socket\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802358 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb87671e-1bee-4bef-843d-6fce9467079d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802370 4915 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-slash\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802381 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802394 4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb87671e-1bee-4bef-843d-6fce9467079d-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802405 4915 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802417 4915 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802428 4915 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802439 4915 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802450 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwsc5\" (UniqueName: \"kubernetes.io/projected/eb87671e-1bee-4bef-843d-6fce9467079d-kube-api-access-mwsc5\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802460 4915 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802472 4915 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.802484 4915 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb87671e-1bee-4bef-843d-6fce9467079d-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.903839 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-config\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904148 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-log-socket\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904325 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-systemd-units\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904472 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj47x\" (UniqueName: \"kubernetes.io/projected/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-kube-api-access-lj47x\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904592 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-log-socket\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904585 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-systemd-units\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904766 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-node-log\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.904776 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-config\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905043 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-node-log\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905202 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-systemd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905355 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-netd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovn-node-metrics-cert\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905673 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.906070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.906393 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-etc-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905300 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-systemd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.906218 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.905418 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-netd\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.906473 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-etc-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.907482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-script-lib\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.907879 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovnkube-script-lib\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.908101 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-bin\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.908300 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-slash\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.908670 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.908416 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-slash\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.908391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-cni-bin\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.909019 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.909889 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-var-lib-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.910182 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-ovn-node-metrics-cert\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.910039 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-var-lib-openvswitch\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.910697 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-netns\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.910959 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-kubelet\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.911207 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-ovn\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.911405 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-env-overrides\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.911040 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-run-netns\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.911287 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-run-ovn\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.911068 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-host-kubelet\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.912180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-env-overrides\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:56 crc kubenswrapper[4915]: I0127 18:53:56.930243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj47x\" (UniqueName: \"kubernetes.io/projected/f72c55b7-7766-4b3d-a9a4-8900c4b7876e-kube-api-access-lj47x\") pod \"ovnkube-node-hhfxm\" (UID: \"f72c55b7-7766-4b3d-a9a4-8900c4b7876e\") " pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.013316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.117853 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"79b4c0c640d32847aebf2c2a19e39439ed617c425c513ab4631f4ae0bc37ea04"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.121465 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovnkube-controller/3.log"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.126680 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovn-acl-logging/0.log"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.127432 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8spt_eb87671e-1bee-4bef-843d-6fce9467079d/ovn-controller/0.log"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128087 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128126 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128140 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128157 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128173 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128186 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" exitCode=0
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128203 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" exitCode=143
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128216 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb87671e-1bee-4bef-843d-6fce9467079d" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" exitCode=143
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128223 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt"
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128290 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128351 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128372 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128392 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128426 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128446 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128463 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128475 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128487 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128498 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128509 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128521 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128532 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128543 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128558 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128573 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128585 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128596 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128607 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128618 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128629 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128639 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128650 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128661 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128672 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128687 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128702 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128715 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128725 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128736 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128746 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128756 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128766 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128777 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128786 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128822 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128862 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8spt" event={"ID":"eb87671e-1bee-4bef-843d-6fce9467079d","Type":"ContainerDied","Data":"fa9ba831fdb5173312e7761600cf5c9d6a31134af62c8716808cbd914bc5465b"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128878 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128890 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128900 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128911 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128922 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128932 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"}
Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128942 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"}
Jan 27
18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128954 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128964 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128974 4915 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.128996 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.132961 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/2.log" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.133612 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/1.log" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.133674 4915 generic.go:334] "Generic (PLEG): container finished" podID="fe27a668-1ea7-44c8-9490-55cf8db5dad9" containerID="9e91d782087fea97d711280d394674401f8eaffbe0664580708da1efeaab5904" exitCode=2 Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.133706 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerDied","Data":"9e91d782087fea97d711280d394674401f8eaffbe0664580708da1efeaab5904"} Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.133730 4915 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822"} Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.134357 4915 scope.go:117] "RemoveContainer" containerID="9e91d782087fea97d711280d394674401f8eaffbe0664580708da1efeaab5904" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.134663 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5bpjb_openshift-multus(fe27a668-1ea7-44c8-9490-55cf8db5dad9)\"" pod="openshift-multus/multus-5bpjb" podUID="fe27a668-1ea7-44c8-9490-55cf8db5dad9" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.188403 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.211862 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8spt"] Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.219962 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8spt"] Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.232150 4915 scope.go:117] "RemoveContainer" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.253508 4915 scope.go:117] "RemoveContainer" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.303710 4915 scope.go:117] "RemoveContainer" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.317970 4915 scope.go:117] "RemoveContainer" 
containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.335043 4915 scope.go:117] "RemoveContainer" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.349847 4915 scope.go:117] "RemoveContainer" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.367357 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb87671e-1bee-4bef-843d-6fce9467079d" path="/var/lib/kubelet/pods/eb87671e-1bee-4bef-843d-6fce9467079d/volumes" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.367412 4915 scope.go:117] "RemoveContainer" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.381576 4915 scope.go:117] "RemoveContainer" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.397265 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.405760 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not exist" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.405821 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} err="failed to get container status 
\"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": rpc error: code = NotFound desc = could not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.405848 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.406133 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": container with ID starting with c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3 not found: ID does not exist" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406159 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} err="failed to get container status \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": rpc error: code = NotFound desc = could not find container \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": container with ID starting with c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406178 4915 scope.go:117] "RemoveContainer" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.406473 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": container with ID starting with d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5 not found: ID does not exist" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406501 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"} err="failed to get container status \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": rpc error: code = NotFound desc = could not find container \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": container with ID starting with d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406521 4915 scope.go:117] "RemoveContainer" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.406727 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": container with ID starting with ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785 not found: ID does not exist" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406752 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"} err="failed to get container status \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": rpc error: code = NotFound desc = could not find container \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": container with ID 
starting with ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.406768 4915 scope.go:117] "RemoveContainer" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.407092 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": container with ID starting with 887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc not found: ID does not exist" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407115 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"} err="failed to get container status \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": rpc error: code = NotFound desc = could not find container \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": container with ID starting with 887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407131 4915 scope.go:117] "RemoveContainer" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.407358 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": container with ID starting with 0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f not found: ID does not exist" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 
18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407408 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"} err="failed to get container status \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": rpc error: code = NotFound desc = could not find container \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": container with ID starting with 0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407479 4915 scope.go:117] "RemoveContainer" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.407841 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": container with ID starting with 0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73 not found: ID does not exist" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407887 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"} err="failed to get container status \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": rpc error: code = NotFound desc = could not find container \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": container with ID starting with 0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.407905 4915 scope.go:117] "RemoveContainer" 
containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.408170 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": container with ID starting with 1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047 not found: ID does not exist" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408196 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} err="failed to get container status \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": rpc error: code = NotFound desc = could not find container \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": container with ID starting with 1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408213 4915 scope.go:117] "RemoveContainer" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.408423 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": container with ID starting with 390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271 not found: ID does not exist" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408447 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} err="failed to get container status \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": rpc error: code = NotFound desc = could not find container \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": container with ID starting with 390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408465 4915 scope.go:117] "RemoveContainer" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: E0127 18:53:57.408723 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": container with ID starting with 823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc not found: ID does not exist" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408748 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} err="failed to get container status \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": rpc error: code = NotFound desc = could not find container \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": container with ID starting with 823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.408763 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409060 4915 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} err="failed to get container status \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": rpc error: code = NotFound desc = could not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409079 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409272 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} err="failed to get container status \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": rpc error: code = NotFound desc = could not find container \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": container with ID starting with c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409289 4915 scope.go:117] "RemoveContainer" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409593 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"} err="failed to get container status \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": rpc error: code = NotFound desc = could not find container \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": container with ID starting with d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5 not 
found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409618 4915 scope.go:117] "RemoveContainer" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409800 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"} err="failed to get container status \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": rpc error: code = NotFound desc = could not find container \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": container with ID starting with ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.409826 4915 scope.go:117] "RemoveContainer" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410077 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"} err="failed to get container status \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": rpc error: code = NotFound desc = could not find container \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": container with ID starting with 887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410097 4915 scope.go:117] "RemoveContainer" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410425 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"} err="failed to get 
container status \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": rpc error: code = NotFound desc = could not find container \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": container with ID starting with 0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410507 4915 scope.go:117] "RemoveContainer" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410753 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"} err="failed to get container status \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": rpc error: code = NotFound desc = could not find container \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": container with ID starting with 0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.410779 4915 scope.go:117] "RemoveContainer" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411075 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} err="failed to get container status \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": rpc error: code = NotFound desc = could not find container \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": container with ID starting with 1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411098 4915 scope.go:117] "RemoveContainer" 
containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411335 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} err="failed to get container status \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": rpc error: code = NotFound desc = could not find container \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": container with ID starting with 390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411359 4915 scope.go:117] "RemoveContainer" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411636 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} err="failed to get container status \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": rpc error: code = NotFound desc = could not find container \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": container with ID starting with 823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411658 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411916 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} err="failed to get container status \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": rpc error: code = NotFound desc = could 
not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.411933 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.413284 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} err="failed to get container status \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": rpc error: code = NotFound desc = could not find container \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": container with ID starting with c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.413309 4915 scope.go:117] "RemoveContainer" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.413587 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"} err="failed to get container status \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": rpc error: code = NotFound desc = could not find container \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": container with ID starting with d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.413612 4915 scope.go:117] "RemoveContainer" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 
18:53:57.414023 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"} err="failed to get container status \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": rpc error: code = NotFound desc = could not find container \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": container with ID starting with ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.414080 4915 scope.go:117] "RemoveContainer" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.414481 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"} err="failed to get container status \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": rpc error: code = NotFound desc = could not find container \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": container with ID starting with 887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.414507 4915 scope.go:117] "RemoveContainer" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415048 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"} err="failed to get container status \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": rpc error: code = NotFound desc = could not find container \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": container with ID starting with 
0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415085 4915 scope.go:117] "RemoveContainer" containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415399 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"} err="failed to get container status \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": rpc error: code = NotFound desc = could not find container \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": container with ID starting with 0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415434 4915 scope.go:117] "RemoveContainer" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415769 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} err="failed to get container status \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": rpc error: code = NotFound desc = could not find container \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": container with ID starting with 1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.415811 4915 scope.go:117] "RemoveContainer" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416132 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} err="failed to get container status \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": rpc error: code = NotFound desc = could not find container \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": container with ID starting with 390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416171 4915 scope.go:117] "RemoveContainer" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416503 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} err="failed to get container status \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": rpc error: code = NotFound desc = could not find container \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": container with ID starting with 823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416537 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416923 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} err="failed to get container status \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": rpc error: code = NotFound desc = could not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not 
exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.416972 4915 scope.go:117] "RemoveContainer" containerID="c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.418529 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3"} err="failed to get container status \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": rpc error: code = NotFound desc = could not find container \"c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3\": container with ID starting with c34f636581caa9ea291a1212d78da9aaeab9c59b4ad4ba0d1363afa403eb73b3 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.418562 4915 scope.go:117] "RemoveContainer" containerID="d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.418785 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5"} err="failed to get container status \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": rpc error: code = NotFound desc = could not find container \"d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5\": container with ID starting with d917eccbe5d47a04d5e8546f3d58447586610a207ff932215f2c02c336dc9ad5 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.418841 4915 scope.go:117] "RemoveContainer" containerID="ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.419091 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785"} err="failed to get container status 
\"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": rpc error: code = NotFound desc = could not find container \"ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785\": container with ID starting with ab932d19484f479d895fae674febd44fd5192f425e1a73e08fbe61e840ce3785 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.419116 4915 scope.go:117] "RemoveContainer" containerID="887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.420031 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc"} err="failed to get container status \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": rpc error: code = NotFound desc = could not find container \"887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc\": container with ID starting with 887cae6fef2213a6742dfcfddf9e617f477691f73f29bf1947ba8616a6ea94cc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.420064 4915 scope.go:117] "RemoveContainer" containerID="0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.420360 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f"} err="failed to get container status \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": rpc error: code = NotFound desc = could not find container \"0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f\": container with ID starting with 0f86704c0e94674782de33bc6f3b3dc41af75d0ab5421ff1cdc93a70b1970d4f not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.420382 4915 scope.go:117] "RemoveContainer" 
containerID="0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421050 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73"} err="failed to get container status \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": rpc error: code = NotFound desc = could not find container \"0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73\": container with ID starting with 0035fbcc835098b6718e477f99c657f8085ea0fe3eb250516b6388b2cc906b73 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421077 4915 scope.go:117] "RemoveContainer" containerID="1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421406 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047"} err="failed to get container status \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": rpc error: code = NotFound desc = could not find container \"1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047\": container with ID starting with 1ffe65de02479eb6363fec38252c8f4456f8073015ce01ffb1d3c44a3f5cb047 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421431 4915 scope.go:117] "RemoveContainer" containerID="390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421665 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271"} err="failed to get container status \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": rpc error: code = NotFound desc = could 
not find container \"390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271\": container with ID starting with 390616fdc5b72e78d35835f24c5eb7e9e4f1f62b7ad16944a3504ab6a13b1271 not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.421713 4915 scope.go:117] "RemoveContainer" containerID="823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.422110 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc"} err="failed to get container status \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": rpc error: code = NotFound desc = could not find container \"823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc\": container with ID starting with 823e96515b4e4b59b539438ee06bb0151a147e19bd52b4b404e09d3738af68bc not found: ID does not exist" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.422134 4915 scope.go:117] "RemoveContainer" containerID="63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8" Jan 27 18:53:57 crc kubenswrapper[4915]: I0127 18:53:57.422482 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8"} err="failed to get container status \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": rpc error: code = NotFound desc = could not find container \"63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8\": container with ID starting with 63d6a9f704902382d38d66deaca7c99c958289e1abd566ac1637619a963831d8 not found: ID does not exist" Jan 27 18:53:58 crc kubenswrapper[4915]: I0127 18:53:58.141988 4915 generic.go:334] "Generic (PLEG): container finished" podID="f72c55b7-7766-4b3d-a9a4-8900c4b7876e" 
containerID="3d242041f55761de7489af4a40af5a173d6f05e5e85bb870082efbb72d7a6952" exitCode=0 Jan 27 18:53:58 crc kubenswrapper[4915]: I0127 18:53:58.142051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerDied","Data":"3d242041f55761de7489af4a40af5a173d6f05e5e85bb870082efbb72d7a6952"} Jan 27 18:53:59 crc kubenswrapper[4915]: I0127 18:53:59.155982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"d74c6592a41e10a0a8d82320017cc1d9bb43b2f5b8def8537b5793266f0c0030"} Jan 27 18:53:59 crc kubenswrapper[4915]: I0127 18:53:59.156559 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"d237235c0aa67fec721bdca2f144bed676396801883b2af8c694ce1f4ddc1b82"} Jan 27 18:53:59 crc kubenswrapper[4915]: I0127 18:53:59.156574 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"62499957f4d50ae247054825a4d847107462a867045539d6ae9ce7e9eeed670d"} Jan 27 18:53:59 crc kubenswrapper[4915]: I0127 18:53:59.156585 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"85519b1a0267f189731396e07b8af257e0815f1fc1bc2140a3954520901975fc"} Jan 27 18:54:00 crc kubenswrapper[4915]: I0127 18:54:00.173029 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" 
event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"a2d7677ecb7f06cc46b6e4e261e49fb65abad8aea7a608c424a1a2641e69ee89"} Jan 27 18:54:00 crc kubenswrapper[4915]: I0127 18:54:00.173392 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"fea5219abfb651c0d651824128b9994a566c7e997c7f91594f089029b2b39bb7"} Jan 27 18:54:02 crc kubenswrapper[4915]: I0127 18:54:02.187922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"9b22fca9d4a9c6dd9f2db6c5b3ac5117c96dbe6c5a2c3d7629152ad8c6f3eea2"} Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.432621 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rnmzh"] Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.433725 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.437340 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.437424 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.437421 4915 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l2x7g" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.441071 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.499403 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.499465 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4w76\" (UniqueName: \"kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.499665 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.601363 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.601464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4w76\" (UniqueName: \"kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.601513 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.601865 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.602780 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.621175 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4w76\" (UniqueName: 
\"kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76\") pod \"crc-storage-crc-rnmzh\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: I0127 18:54:03.759033 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: E0127 18:54:03.790168 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(31030f1bcb522f398ea92f134697469e97d4549d7fc495f4a084f6b765ec8217): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:54:03 crc kubenswrapper[4915]: E0127 18:54:03.790390 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(31030f1bcb522f398ea92f134697469e97d4549d7fc495f4a084f6b765ec8217): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: E0127 18:54:03.790410 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(31030f1bcb522f398ea92f134697469e97d4549d7fc495f4a084f6b765ec8217): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:03 crc kubenswrapper[4915]: E0127 18:54:03.790451 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-rnmzh_crc-storage(2af86ec3-0a4b-4749-8da0-a67435d42aee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-rnmzh_crc-storage(2af86ec3-0a4b-4749-8da0-a67435d42aee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(31030f1bcb522f398ea92f134697469e97d4549d7fc495f4a084f6b765ec8217): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-rnmzh" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.213609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" event={"ID":"f72c55b7-7766-4b3d-a9a4-8900c4b7876e","Type":"ContainerStarted","Data":"8dfb8188fc68e642ad032af04460cc33530d800d7a611d72ade3389724add598"} Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.214085 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.214267 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.257682 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" podStartSLOduration=8.257654528 podStartE2EDuration="8.257654528s" podCreationTimestamp="2026-01-27 18:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:54:04.251688342 +0000 UTC m=+735.609542036" 
watchObservedRunningTime="2026-01-27 18:54:04.257654528 +0000 UTC m=+735.615508212" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.263589 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.855548 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rnmzh"] Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.856002 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:04 crc kubenswrapper[4915]: I0127 18:54:04.856433 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:04 crc kubenswrapper[4915]: E0127 18:54:04.878689 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(a8bd1536dafefac4fa7d9b6d815084219477714e6cc7cb2137afabe990a99cb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 18:54:04 crc kubenswrapper[4915]: E0127 18:54:04.878777 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(a8bd1536dafefac4fa7d9b6d815084219477714e6cc7cb2137afabe990a99cb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:04 crc kubenswrapper[4915]: E0127 18:54:04.878850 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(a8bd1536dafefac4fa7d9b6d815084219477714e6cc7cb2137afabe990a99cb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:04 crc kubenswrapper[4915]: E0127 18:54:04.878934 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-rnmzh_crc-storage(2af86ec3-0a4b-4749-8da0-a67435d42aee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-rnmzh_crc-storage(2af86ec3-0a4b-4749-8da0-a67435d42aee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-rnmzh_crc-storage_2af86ec3-0a4b-4749-8da0-a67435d42aee_0(a8bd1536dafefac4fa7d9b6d815084219477714e6cc7cb2137afabe990a99cb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-rnmzh" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" Jan 27 18:54:05 crc kubenswrapper[4915]: I0127 18:54:05.220166 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:54:05 crc kubenswrapper[4915]: I0127 18:54:05.251998 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm" Jan 27 18:54:10 crc kubenswrapper[4915]: I0127 18:54:10.358095 4915 scope.go:117] "RemoveContainer" containerID="9e91d782087fea97d711280d394674401f8eaffbe0664580708da1efeaab5904" Jan 27 18:54:11 crc kubenswrapper[4915]: I0127 18:54:11.256874 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/2.log" Jan 27 18:54:11 crc kubenswrapper[4915]: I0127 18:54:11.257502 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/1.log" Jan 27 18:54:11 crc kubenswrapper[4915]: I0127 18:54:11.257554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5bpjb" event={"ID":"fe27a668-1ea7-44c8-9490-55cf8db5dad9","Type":"ContainerStarted","Data":"b204a4d11bdbfcc1a1a8e08b3f97132878947a0b0ca53d4437629791a4aa1d49"} Jan 27 18:54:17 crc kubenswrapper[4915]: I0127 18:54:17.357563 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:17 crc kubenswrapper[4915]: I0127 18:54:17.359531 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:17 crc kubenswrapper[4915]: I0127 18:54:17.603958 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rnmzh"] Jan 27 18:54:17 crc kubenswrapper[4915]: I0127 18:54:17.617457 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:54:18 crc kubenswrapper[4915]: I0127 18:54:18.305134 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rnmzh" event={"ID":"2af86ec3-0a4b-4749-8da0-a67435d42aee","Type":"ContainerStarted","Data":"27233285b41ae2e8ffc9695a6ea578a2cf45a84dec5fe7b1ccb1b13b633a258c"} Jan 27 18:54:19 crc kubenswrapper[4915]: I0127 18:54:19.316205 4915 generic.go:334] "Generic (PLEG): container finished" podID="2af86ec3-0a4b-4749-8da0-a67435d42aee" containerID="d2e26f7efab91a0c0839e1fee083d2c70a8048d7e522b0db19dadbc99ab3bad3" exitCode=0 Jan 27 18:54:19 crc kubenswrapper[4915]: I0127 18:54:19.316446 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rnmzh" event={"ID":"2af86ec3-0a4b-4749-8da0-a67435d42aee","Type":"ContainerDied","Data":"d2e26f7efab91a0c0839e1fee083d2c70a8048d7e522b0db19dadbc99ab3bad3"} Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.558583 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh" Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.625194 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.625260 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.667001 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt\") pod \"2af86ec3-0a4b-4749-8da0-a67435d42aee\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.667101 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4w76\" (UniqueName: \"kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76\") pod \"2af86ec3-0a4b-4749-8da0-a67435d42aee\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.667138 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage\") pod \"2af86ec3-0a4b-4749-8da0-a67435d42aee\" (UID: \"2af86ec3-0a4b-4749-8da0-a67435d42aee\") " Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.667172 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "2af86ec3-0a4b-4749-8da0-a67435d42aee" (UID: "2af86ec3-0a4b-4749-8da0-a67435d42aee"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.667410 4915 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2af86ec3-0a4b-4749-8da0-a67435d42aee-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.673743 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76" (OuterVolumeSpecName: "kube-api-access-k4w76") pod "2af86ec3-0a4b-4749-8da0-a67435d42aee" (UID: "2af86ec3-0a4b-4749-8da0-a67435d42aee"). InnerVolumeSpecName "kube-api-access-k4w76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.692885 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "2af86ec3-0a4b-4749-8da0-a67435d42aee" (UID: "2af86ec3-0a4b-4749-8da0-a67435d42aee"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.768262 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4w76\" (UniqueName: \"kubernetes.io/projected/2af86ec3-0a4b-4749-8da0-a67435d42aee-kube-api-access-k4w76\") on node \"crc\" DevicePath \"\""
Jan 27 18:54:20 crc kubenswrapper[4915]: I0127 18:54:20.768299 4915 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2af86ec3-0a4b-4749-8da0-a67435d42aee-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 27 18:54:21 crc kubenswrapper[4915]: I0127 18:54:21.328377 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rnmzh" event={"ID":"2af86ec3-0a4b-4749-8da0-a67435d42aee","Type":"ContainerDied","Data":"27233285b41ae2e8ffc9695a6ea578a2cf45a84dec5fe7b1ccb1b13b633a258c"}
Jan 27 18:54:21 crc kubenswrapper[4915]: I0127 18:54:21.328417 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27233285b41ae2e8ffc9695a6ea578a2cf45a84dec5fe7b1ccb1b13b633a258c"
Jan 27 18:54:21 crc kubenswrapper[4915]: I0127 18:54:21.328467 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rnmzh"
Jan 27 18:54:27 crc kubenswrapper[4915]: I0127 18:54:27.038696 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hhfxm"
Jan 27 18:54:27 crc kubenswrapper[4915]: I0127 18:54:27.622527 4915 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.744644 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"]
Jan 27 18:54:28 crc kubenswrapper[4915]: E0127 18:54:28.744842 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" containerName="storage"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.744853 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" containerName="storage"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.744953 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" containerName="storage"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.745608 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.747385 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.757856 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"]
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.787000 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7ql\" (UniqueName: \"kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.787083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.787104 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.887877 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.888202 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.888267 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7ql\" (UniqueName: \"kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.888502 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.888630 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:28 crc kubenswrapper[4915]: I0127 18:54:28.907608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7ql\" (UniqueName: \"kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:29 crc kubenswrapper[4915]: I0127 18:54:29.063472 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:29 crc kubenswrapper[4915]: I0127 18:54:29.457543 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"]
Jan 27 18:54:29 crc kubenswrapper[4915]: W0127 18:54:29.464346 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc75c7efc_2e8b_40e1_94ca_7be378c4b004.slice/crio-0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d WatchSource:0}: Error finding container 0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d: Status 404 returned error can't find the container with id 0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d
Jan 27 18:54:30 crc kubenswrapper[4915]: I0127 18:54:30.379153 4915 generic.go:334] "Generic (PLEG): container finished" podID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerID="3b76b16621dfd327c9c4d5084e8804d048d9e28d8151c0697bc66cbce1b1650c" exitCode=0
Jan 27 18:54:30 crc kubenswrapper[4915]: I0127 18:54:30.379263 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8" event={"ID":"c75c7efc-2e8b-40e1-94ca-7be378c4b004","Type":"ContainerDied","Data":"3b76b16621dfd327c9c4d5084e8804d048d9e28d8151c0697bc66cbce1b1650c"}
Jan 27 18:54:30 crc kubenswrapper[4915]: I0127 18:54:30.379508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8" event={"ID":"c75c7efc-2e8b-40e1-94ca-7be378c4b004","Type":"ContainerStarted","Data":"0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d"}
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.070764 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khshf"]
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.073039 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.090072 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khshf"]
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.217988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gc4b\" (UniqueName: \"kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.218131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.218327 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.319038 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.319103 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gc4b\" (UniqueName: \"kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.319149 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.319558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.319608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.346511 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gc4b\" (UniqueName: \"kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b\") pod \"redhat-operators-khshf\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.395513 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:31 crc kubenswrapper[4915]: I0127 18:54:31.574692 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khshf"]
Jan 27 18:54:31 crc kubenswrapper[4915]: W0127 18:54:31.581624 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0fc551e_86af_4205_a3b0_79be72e100c7.slice/crio-275857e9c5b2d738f2dfc1283e3c77f560a4094968aca7e93e8798b7d2fe73b5 WatchSource:0}: Error finding container 275857e9c5b2d738f2dfc1283e3c77f560a4094968aca7e93e8798b7d2fe73b5: Status 404 returned error can't find the container with id 275857e9c5b2d738f2dfc1283e3c77f560a4094968aca7e93e8798b7d2fe73b5
Jan 27 18:54:32 crc kubenswrapper[4915]: I0127 18:54:32.392739 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerID="0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec" exitCode=0
Jan 27 18:54:32 crc kubenswrapper[4915]: I0127 18:54:32.392809 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerDied","Data":"0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec"}
Jan 27 18:54:32 crc kubenswrapper[4915]: I0127 18:54:32.392840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerStarted","Data":"275857e9c5b2d738f2dfc1283e3c77f560a4094968aca7e93e8798b7d2fe73b5"}
Jan 27 18:54:33 crc kubenswrapper[4915]: I0127 18:54:33.402948 4915 generic.go:334] "Generic (PLEG): container finished" podID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerID="9f3c5954fdf99874ece6779d42c645526b612ab06a33e67d41782353151733f3" exitCode=0
Jan 27 18:54:33 crc kubenswrapper[4915]: I0127 18:54:33.403048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8" event={"ID":"c75c7efc-2e8b-40e1-94ca-7be378c4b004","Type":"ContainerDied","Data":"9f3c5954fdf99874ece6779d42c645526b612ab06a33e67d41782353151733f3"}
Jan 27 18:54:33 crc kubenswrapper[4915]: I0127 18:54:33.422054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerStarted","Data":"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d"}
Jan 27 18:54:34 crc kubenswrapper[4915]: I0127 18:54:34.431222 4915 generic.go:334] "Generic (PLEG): container finished" podID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerID="dac96d1403382b8c0fd20d726543a347f24cd9b33b8a93aeda5c9f39af10c496" exitCode=0
Jan 27 18:54:34 crc kubenswrapper[4915]: I0127 18:54:34.431300 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8" event={"ID":"c75c7efc-2e8b-40e1-94ca-7be378c4b004","Type":"ContainerDied","Data":"dac96d1403382b8c0fd20d726543a347f24cd9b33b8a93aeda5c9f39af10c496"}
Jan 27 18:54:34 crc kubenswrapper[4915]: I0127 18:54:34.433134 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerID="2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d" exitCode=0
Jan 27 18:54:34 crc kubenswrapper[4915]: I0127 18:54:34.433179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerDied","Data":"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d"}
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.442571 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerStarted","Data":"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15"}
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.463235 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khshf" podStartSLOduration=1.98458153 podStartE2EDuration="4.463218928s" podCreationTimestamp="2026-01-27 18:54:31 +0000 UTC" firstStartedPulling="2026-01-27 18:54:32.394028806 +0000 UTC m=+763.751882460" lastFinishedPulling="2026-01-27 18:54:34.872666154 +0000 UTC m=+766.230519858" observedRunningTime="2026-01-27 18:54:35.461208749 +0000 UTC m=+766.819062423" watchObservedRunningTime="2026-01-27 18:54:35.463218928 +0000 UTC m=+766.821072582"
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.708752 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.879739 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util\") pod \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") "
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.879826 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7ql\" (UniqueName: \"kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql\") pod \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") "
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.879882 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle\") pod \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\" (UID: \"c75c7efc-2e8b-40e1-94ca-7be378c4b004\") "
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.880960 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle" (OuterVolumeSpecName: "bundle") pod "c75c7efc-2e8b-40e1-94ca-7be378c4b004" (UID: "c75c7efc-2e8b-40e1-94ca-7be378c4b004"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.890964 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util" (OuterVolumeSpecName: "util") pod "c75c7efc-2e8b-40e1-94ca-7be378c4b004" (UID: "c75c7efc-2e8b-40e1-94ca-7be378c4b004"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.893314 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql" (OuterVolumeSpecName: "kube-api-access-fz7ql") pod "c75c7efc-2e8b-40e1-94ca-7be378c4b004" (UID: "c75c7efc-2e8b-40e1-94ca-7be378c4b004"). InnerVolumeSpecName "kube-api-access-fz7ql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.981040 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.981088 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c75c7efc-2e8b-40e1-94ca-7be378c4b004-util\") on node \"crc\" DevicePath \"\""
Jan 27 18:54:35 crc kubenswrapper[4915]: I0127 18:54:35.981107 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7ql\" (UniqueName: \"kubernetes.io/projected/c75c7efc-2e8b-40e1-94ca-7be378c4b004-kube-api-access-fz7ql\") on node \"crc\" DevicePath \"\""
Jan 27 18:54:36 crc kubenswrapper[4915]: I0127 18:54:36.451854 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8" event={"ID":"c75c7efc-2e8b-40e1-94ca-7be378c4b004","Type":"ContainerDied","Data":"0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d"}
Jan 27 18:54:36 crc kubenswrapper[4915]: I0127 18:54:36.451956 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0278f4f3ad8acdcd50889c48dbb4222e4165a97f703baa9b1dc7f8e659af557d"
Jan 27 18:54:36 crc kubenswrapper[4915]: I0127 18:54:36.451894 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973058 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r6dxr"]
Jan 27 18:54:38 crc kubenswrapper[4915]: E0127 18:54:38.973464 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="extract"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973475 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="extract"
Jan 27 18:54:38 crc kubenswrapper[4915]: E0127 18:54:38.973487 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="util"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973493 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="util"
Jan 27 18:54:38 crc kubenswrapper[4915]: E0127 18:54:38.973505 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="pull"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973511 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="pull"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973596 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c7efc-2e8b-40e1-94ca-7be378c4b004" containerName="extract"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.973934 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.976628 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q69gl"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.976998 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.977068 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 27 18:54:38 crc kubenswrapper[4915]: I0127 18:54:38.988908 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r6dxr"]
Jan 27 18:54:39 crc kubenswrapper[4915]: I0127 18:54:39.019036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb24j\" (UniqueName: \"kubernetes.io/projected/431a5ebc-8bd2-400d-a65a-e696e4a00f5c-kube-api-access-mb24j\") pod \"nmstate-operator-646758c888-r6dxr\" (UID: \"431a5ebc-8bd2-400d-a65a-e696e4a00f5c\") " pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr"
Jan 27 18:54:39 crc kubenswrapper[4915]: I0127 18:54:39.120169 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb24j\" (UniqueName: \"kubernetes.io/projected/431a5ebc-8bd2-400d-a65a-e696e4a00f5c-kube-api-access-mb24j\") pod \"nmstate-operator-646758c888-r6dxr\" (UID: \"431a5ebc-8bd2-400d-a65a-e696e4a00f5c\") " pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr"
Jan 27 18:54:39 crc kubenswrapper[4915]: I0127 18:54:39.151492 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb24j\" (UniqueName: \"kubernetes.io/projected/431a5ebc-8bd2-400d-a65a-e696e4a00f5c-kube-api-access-mb24j\") pod \"nmstate-operator-646758c888-r6dxr\" (UID: \"431a5ebc-8bd2-400d-a65a-e696e4a00f5c\") " pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr"
Jan 27 18:54:39 crc kubenswrapper[4915]: I0127 18:54:39.289657 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr"
Jan 27 18:54:39 crc kubenswrapper[4915]: I0127 18:54:39.519239 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r6dxr"]
Jan 27 18:54:40 crc kubenswrapper[4915]: I0127 18:54:40.478543 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr" event={"ID":"431a5ebc-8bd2-400d-a65a-e696e4a00f5c","Type":"ContainerStarted","Data":"3883985c098c89df769a753518aa1d7478bb43cfc012b1283ea9f2d2df0d3e33"}
Jan 27 18:54:41 crc kubenswrapper[4915]: I0127 18:54:41.396564 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:41 crc kubenswrapper[4915]: I0127 18:54:41.396904 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khshf"
Jan 27 18:54:42 crc kubenswrapper[4915]: I0127 18:54:42.461404 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khshf" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:54:42 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:54:42 crc kubenswrapper[4915]: >
Jan 27 18:54:43 crc kubenswrapper[4915]: I0127 18:54:43.501444 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr" event={"ID":"431a5ebc-8bd2-400d-a65a-e696e4a00f5c","Type":"ContainerStarted","Data":"25a4200cbb60177466d47ba5a15f5a3d7e200df4fc2991e95555202c18944d81"}
Jan 27 18:54:43 crc kubenswrapper[4915]: I0127 18:54:43.520513 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r6dxr" podStartSLOduration=1.885503514 podStartE2EDuration="5.520491608s" podCreationTimestamp="2026-01-27 18:54:38 +0000 UTC" firstStartedPulling="2026-01-27 18:54:39.52677874 +0000 UTC m=+770.884632414" lastFinishedPulling="2026-01-27 18:54:43.161766844 +0000 UTC m=+774.519620508" observedRunningTime="2026-01-27 18:54:43.514138793 +0000 UTC m=+774.871992457" watchObservedRunningTime="2026-01-27 18:54:43.520491608 +0000 UTC m=+774.878345272"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.063098 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m56sv"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.064751 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.067826 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q2pww"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.076992 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.077745 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.080279 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.089007 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m56sv"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.110811 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-knsbk"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.123955 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.160368 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjrx\" (UniqueName: \"kubernetes.io/projected/c08f40b6-6f77-4a0b-8236-15d653d5fc44-kube-api-access-nhjrx\") pod \"nmstate-metrics-54757c584b-m56sv\" (UID: \"c08f40b6-6f77-4a0b-8236-15d653d5fc44\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.160423 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntzb\" (UniqueName: \"kubernetes.io/projected/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-kube-api-access-7ntzb\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.160452 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.171530 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.208451 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.209877 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.216153 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.216254 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g6h9k"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.216272 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.219745 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"]
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.261934 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-dbus-socket\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262045 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262106 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9bh\" (UniqueName: \"kubernetes.io/projected/6da2a25b-7823-4b43-849c-9a7d3e3894f6-kube-api-access-ml9bh\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262164 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjrx\" (UniqueName: \"kubernetes.io/projected/c08f40b6-6f77-4a0b-8236-15d653d5fc44-kube-api-access-nhjrx\") pod \"nmstate-metrics-54757c584b-m56sv\" (UID: \"c08f40b6-6f77-4a0b-8236-15d653d5fc44\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262196 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-ovs-socket\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: E0127 18:54:49.262252 4915 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 27 18:54:49 crc kubenswrapper[4915]: E0127 18:54:49.262337 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair podName:670c5b3f-6bbe-4055-ad13-e2db42a6c70e nodeName:}" failed. No retries permitted until 2026-01-27 18:54:49.76231386 +0000 UTC m=+781.120167524 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-89j8n" (UID: "670c5b3f-6bbe-4055-ad13-e2db42a6c70e") : secret "openshift-nmstate-webhook" not found
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262602 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntzb\" (UniqueName: \"kubernetes.io/projected/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-kube-api-access-7ntzb\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.262631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-nmstate-lock\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.281352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjrx\" (UniqueName: \"kubernetes.io/projected/c08f40b6-6f77-4a0b-8236-15d653d5fc44-kube-api-access-nhjrx\") pod \"nmstate-metrics-54757c584b-m56sv\" (UID: \"c08f40b6-6f77-4a0b-8236-15d653d5fc44\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.290865 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntzb\" (UniqueName: \"kubernetes.io/projected/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-kube-api-access-7ntzb\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.363372 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pzb\" (UniqueName: \"kubernetes.io/projected/ed1227d1-5878-4f4f-8407-d2d1228792d1-kube-api-access-z9pzb\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.363744 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9bh\" (UniqueName: \"kubernetes.io/projected/6da2a25b-7823-4b43-849c-9a7d3e3894f6-kube-api-access-ml9bh\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.363843 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.363924 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ed1227d1-5878-4f4f-8407-d2d1228792d1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"
Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-ovs-socket\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364107 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-nmstate-lock\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364160 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-nmstate-lock\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-dbus-socket\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364082 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-ovs-socket\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.364566 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6da2a25b-7823-4b43-849c-9a7d3e3894f6-dbus-socket\") pod \"nmstate-handler-knsbk\" (UID: 
\"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.390576 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9bh\" (UniqueName: \"kubernetes.io/projected/6da2a25b-7823-4b43-849c-9a7d3e3894f6-kube-api-access-ml9bh\") pod \"nmstate-handler-knsbk\" (UID: \"6da2a25b-7823-4b43-849c-9a7d3e3894f6\") " pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.397065 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-597779d89c-5s2f9"] Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.398361 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.411933 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-597779d89c-5s2f9"] Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.450604 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q2pww" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.459721 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.465540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pzb\" (UniqueName: \"kubernetes.io/projected/ed1227d1-5878-4f4f-8407-d2d1228792d1-kube-api-access-z9pzb\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.465593 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.465620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ed1227d1-5878-4f4f-8407-d2d1228792d1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.467524 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.467750 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.470074 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:49 crc kubenswrapper[4915]: E0127 18:54:49.476143 4915 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 18:54:49 crc kubenswrapper[4915]: E0127 18:54:49.476203 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert podName:ed1227d1-5878-4f4f-8407-d2d1228792d1 nodeName:}" failed. No retries permitted until 2026-01-27 18:54:49.976183521 +0000 UTC m=+781.334037185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-g6dxv" (UID: "ed1227d1-5878-4f4f-8407-d2d1228792d1") : secret "plugin-serving-cert" not found Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.477143 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ed1227d1-5878-4f4f-8407-d2d1228792d1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.488787 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9pzb\" (UniqueName: \"kubernetes.io/projected/ed1227d1-5878-4f4f-8407-d2d1228792d1-kube-api-access-z9pzb\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.543977 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-knsbk" 
event={"ID":"6da2a25b-7823-4b43-849c-9a7d3e3894f6","Type":"ContainerStarted","Data":"7c985c930c78a38d2cc09adb8b785f5b119def4dae94efa6e6c1e7ed2b417bad"} Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.566584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.566849 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-oauth-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.566867 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7trv\" (UniqueName: \"kubernetes.io/projected/a064db71-acba-4f7e-8c56-8391f0113c2c-kube-api-access-k7trv\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.566889 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-trusted-ca-bundle\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.567023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-oauth-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.567055 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-service-ca\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.567073 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-console-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.665668 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m56sv"] Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668372 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-service-ca\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668409 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-console-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " 
pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668449 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668472 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-oauth-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668489 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7trv\" (UniqueName: \"kubernetes.io/projected/a064db71-acba-4f7e-8c56-8391f0113c2c-kube-api-access-k7trv\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668521 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-trusted-ca-bundle\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.668564 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-oauth-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " 
pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.669404 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-service-ca\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.669733 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-oauth-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.669869 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-console-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.670334 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a064db71-acba-4f7e-8c56-8391f0113c2c-trusted-ca-bundle\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.672743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-oauth-config\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc 
kubenswrapper[4915]: W0127 18:54:49.674520 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc08f40b6_6f77_4a0b_8236_15d653d5fc44.slice/crio-b50ab261d68ec6bd38621a3d980cda6b58f7a9160bf3b1e69b3119b60cce1436 WatchSource:0}: Error finding container b50ab261d68ec6bd38621a3d980cda6b58f7a9160bf3b1e69b3119b60cce1436: Status 404 returned error can't find the container with id b50ab261d68ec6bd38621a3d980cda6b58f7a9160bf3b1e69b3119b60cce1436 Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.675458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a064db71-acba-4f7e-8c56-8391f0113c2c-console-serving-cert\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.689400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7trv\" (UniqueName: \"kubernetes.io/projected/a064db71-acba-4f7e-8c56-8391f0113c2c-kube-api-access-k7trv\") pod \"console-597779d89c-5s2f9\" (UID: \"a064db71-acba-4f7e-8c56-8391f0113c2c\") " pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.728365 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.753426 4915 scope.go:117] "RemoveContainer" containerID="999d0ed2d215938e26e9b223263ba88b519b694fdb0ae3c3c518907b54762822" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.769507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.773327 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/670c5b3f-6bbe-4055-ad13-e2db42a6c70e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-89j8n\" (UID: \"670c5b3f-6bbe-4055-ad13-e2db42a6c70e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" Jan 27 18:54:49 crc kubenswrapper[4915]: I0127 18:54:49.909363 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-597779d89c-5s2f9"] Jan 27 18:54:49 crc kubenswrapper[4915]: W0127 18:54:49.915256 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda064db71_acba_4f7e_8c56_8391f0113c2c.slice/crio-e3739b975c56a3152469f13399f2c558781b80bf93ebb70bdeede10d5a713d13 WatchSource:0}: Error finding container e3739b975c56a3152469f13399f2c558781b80bf93ebb70bdeede10d5a713d13: Status 404 returned error can't find the container with id e3739b975c56a3152469f13399f2c558781b80bf93ebb70bdeede10d5a713d13 Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.063181 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.073869 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.079394 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1227d1-5878-4f4f-8407-d2d1228792d1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-g6dxv\" (UID: \"ed1227d1-5878-4f4f-8407-d2d1228792d1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.132688 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g6h9k" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.145502 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.302372 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n"] Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.551134 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" event={"ID":"670c5b3f-6bbe-4055-ad13-e2db42a6c70e","Type":"ContainerStarted","Data":"1739be2ef61ef27df2e71f1e34352d0a1169201b73b71b2d4a95e7bd742157ba"} Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.552262 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv" event={"ID":"c08f40b6-6f77-4a0b-8236-15d653d5fc44","Type":"ContainerStarted","Data":"b50ab261d68ec6bd38621a3d980cda6b58f7a9160bf3b1e69b3119b60cce1436"} Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.554651 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5bpjb_fe27a668-1ea7-44c8-9490-55cf8db5dad9/kube-multus/2.log" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.555982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-597779d89c-5s2f9" event={"ID":"a064db71-acba-4f7e-8c56-8391f0113c2c","Type":"ContainerStarted","Data":"28facd0c189144e776da96054490637e627d5b2f6019d8e53d38a6389a2218e4"} Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.556025 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-597779d89c-5s2f9" event={"ID":"a064db71-acba-4f7e-8c56-8391f0113c2c","Type":"ContainerStarted","Data":"e3739b975c56a3152469f13399f2c558781b80bf93ebb70bdeede10d5a713d13"} Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.603244 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-597779d89c-5s2f9" podStartSLOduration=1.603224099 
podStartE2EDuration="1.603224099s" podCreationTimestamp="2026-01-27 18:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:54:50.578945972 +0000 UTC m=+781.936799666" watchObservedRunningTime="2026-01-27 18:54:50.603224099 +0000 UTC m=+781.961077763" Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.606707 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv"] Jan 27 18:54:50 crc kubenswrapper[4915]: W0127 18:54:50.612024 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded1227d1_5878_4f4f_8407_d2d1228792d1.slice/crio-6a0e49ea3c7b8019ba06c406ddad963745c13d33850a85a46e76a66067c84fab WatchSource:0}: Error finding container 6a0e49ea3c7b8019ba06c406ddad963745c13d33850a85a46e76a66067c84fab: Status 404 returned error can't find the container with id 6a0e49ea3c7b8019ba06c406ddad963745c13d33850a85a46e76a66067c84fab Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.625199 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:54:50 crc kubenswrapper[4915]: I0127 18:54:50.625248 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:54:51 crc kubenswrapper[4915]: I0127 18:54:51.432219 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-khshf" Jan 27 18:54:51 crc kubenswrapper[4915]: I0127 18:54:51.475286 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khshf" Jan 27 18:54:51 crc kubenswrapper[4915]: I0127 18:54:51.562061 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" event={"ID":"ed1227d1-5878-4f4f-8407-d2d1228792d1","Type":"ContainerStarted","Data":"6a0e49ea3c7b8019ba06c406ddad963745c13d33850a85a46e76a66067c84fab"} Jan 27 18:54:51 crc kubenswrapper[4915]: I0127 18:54:51.657307 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khshf"] Jan 27 18:54:52 crc kubenswrapper[4915]: I0127 18:54:52.568233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" event={"ID":"670c5b3f-6bbe-4055-ad13-e2db42a6c70e","Type":"ContainerStarted","Data":"1291224981d1214e20f45c5e9c354f0343b2e7dd358a8486089103a715830fd9"} Jan 27 18:54:52 crc kubenswrapper[4915]: I0127 18:54:52.568366 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khshf" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="registry-server" containerID="cri-o://994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15" gracePeriod=2 Jan 27 18:54:52 crc kubenswrapper[4915]: I0127 18:54:52.591148 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" podStartSLOduration=1.555945743 podStartE2EDuration="3.591133574s" podCreationTimestamp="2026-01-27 18:54:49 +0000 UTC" firstStartedPulling="2026-01-27 18:54:50.317288433 +0000 UTC m=+781.675142097" lastFinishedPulling="2026-01-27 18:54:52.352476254 +0000 UTC m=+783.710329928" observedRunningTime="2026-01-27 18:54:52.587324779 +0000 UTC m=+783.945178443" 
watchObservedRunningTime="2026-01-27 18:54:52.591133574 +0000 UTC m=+783.948987238" Jan 27 18:54:52 crc kubenswrapper[4915]: I0127 18:54:52.943000 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khshf" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.110253 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities\") pod \"b0fc551e-86af-4205-a3b0-79be72e100c7\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.110613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gc4b\" (UniqueName: \"kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b\") pod \"b0fc551e-86af-4205-a3b0-79be72e100c7\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.110746 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content\") pod \"b0fc551e-86af-4205-a3b0-79be72e100c7\" (UID: \"b0fc551e-86af-4205-a3b0-79be72e100c7\") " Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.111403 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities" (OuterVolumeSpecName: "utilities") pod "b0fc551e-86af-4205-a3b0-79be72e100c7" (UID: "b0fc551e-86af-4205-a3b0-79be72e100c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.117223 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b" (OuterVolumeSpecName: "kube-api-access-9gc4b") pod "b0fc551e-86af-4205-a3b0-79be72e100c7" (UID: "b0fc551e-86af-4205-a3b0-79be72e100c7"). InnerVolumeSpecName "kube-api-access-9gc4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.212392 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.212428 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gc4b\" (UniqueName: \"kubernetes.io/projected/b0fc551e-86af-4205-a3b0-79be72e100c7-kube-api-access-9gc4b\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.235684 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0fc551e-86af-4205-a3b0-79be72e100c7" (UID: "b0fc551e-86af-4205-a3b0-79be72e100c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.314012 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fc551e-86af-4205-a3b0-79be72e100c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.573315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-knsbk" event={"ID":"6da2a25b-7823-4b43-849c-9a7d3e3894f6","Type":"ContainerStarted","Data":"e566d7e2bb7f8103986ca0fa2e42a0b8a72f096e8c7761ac964ec026913b5f3c"} Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.573390 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.575370 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerID="994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15" exitCode=0 Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.575437 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerDied","Data":"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15"} Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.575451 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khshf" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.575466 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khshf" event={"ID":"b0fc551e-86af-4205-a3b0-79be72e100c7","Type":"ContainerDied","Data":"275857e9c5b2d738f2dfc1283e3c77f560a4094968aca7e93e8798b7d2fe73b5"} Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.575511 4915 scope.go:117] "RemoveContainer" containerID="994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.576775 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv" event={"ID":"c08f40b6-6f77-4a0b-8236-15d653d5fc44","Type":"ContainerStarted","Data":"9ddfa16a3fd69b43f7f96b66f8de2fdc736c69d3176d782b1a51086d8cf783d8"} Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.576911 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.589197 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-knsbk" podStartSLOduration=1.768529704 podStartE2EDuration="4.589180373s" podCreationTimestamp="2026-01-27 18:54:49 +0000 UTC" firstStartedPulling="2026-01-27 18:54:49.50807015 +0000 UTC m=+780.865923814" lastFinishedPulling="2026-01-27 18:54:52.328720799 +0000 UTC m=+783.686574483" observedRunningTime="2026-01-27 18:54:53.58729028 +0000 UTC m=+784.945143944" watchObservedRunningTime="2026-01-27 18:54:53.589180373 +0000 UTC m=+784.947034037" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.596293 4915 scope.go:117] "RemoveContainer" containerID="2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.603008 4915 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-khshf"] Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.609725 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khshf"] Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.615881 4915 scope.go:117] "RemoveContainer" containerID="0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.693400 4915 scope.go:117] "RemoveContainer" containerID="994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15" Jan 27 18:54:53 crc kubenswrapper[4915]: E0127 18:54:53.693989 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15\": container with ID starting with 994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15 not found: ID does not exist" containerID="994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.694054 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15"} err="failed to get container status \"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15\": rpc error: code = NotFound desc = could not find container \"994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15\": container with ID starting with 994c2a0f0c6f81f200d9ca14b4e58274874683fab5d9ac120e09c38373c36f15 not found: ID does not exist" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.694085 4915 scope.go:117] "RemoveContainer" containerID="2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d" Jan 27 18:54:53 crc kubenswrapper[4915]: E0127 18:54:53.694528 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d\": container with ID starting with 2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d not found: ID does not exist" containerID="2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.694561 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d"} err="failed to get container status \"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d\": rpc error: code = NotFound desc = could not find container \"2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d\": container with ID starting with 2f5b45d1d58dc0b5f38ef7bce2ede2b9eb7b21becd5c81a5a682f1ac9d55ca2d not found: ID does not exist" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.694580 4915 scope.go:117] "RemoveContainer" containerID="0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec" Jan 27 18:54:53 crc kubenswrapper[4915]: E0127 18:54:53.695083 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec\": container with ID starting with 0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec not found: ID does not exist" containerID="0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec" Jan 27 18:54:53 crc kubenswrapper[4915]: I0127 18:54:53.695117 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec"} err="failed to get container status \"0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec\": rpc error: code = NotFound desc = could not find container 
\"0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec\": container with ID starting with 0d6f26e980cf0b090eed91f91318156322cc000c861f399ad4e1deff3aa941ec not found: ID does not exist" Jan 27 18:54:54 crc kubenswrapper[4915]: I0127 18:54:54.585850 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" event={"ID":"ed1227d1-5878-4f4f-8407-d2d1228792d1","Type":"ContainerStarted","Data":"cfea5ce897b2d309480fea819d0636d4f0cc57f0bbacd06dc4bb246dc63368db"} Jan 27 18:54:54 crc kubenswrapper[4915]: I0127 18:54:54.619829 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-g6dxv" podStartSLOduration=2.522824641 podStartE2EDuration="5.619802908s" podCreationTimestamp="2026-01-27 18:54:49 +0000 UTC" firstStartedPulling="2026-01-27 18:54:50.614922203 +0000 UTC m=+781.972775887" lastFinishedPulling="2026-01-27 18:54:53.7119005 +0000 UTC m=+785.069754154" observedRunningTime="2026-01-27 18:54:54.603127492 +0000 UTC m=+785.960981206" watchObservedRunningTime="2026-01-27 18:54:54.619802908 +0000 UTC m=+785.977656582" Jan 27 18:54:55 crc kubenswrapper[4915]: I0127 18:54:55.378675 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" path="/var/lib/kubelet/pods/b0fc551e-86af-4205-a3b0-79be72e100c7/volumes" Jan 27 18:54:55 crc kubenswrapper[4915]: I0127 18:54:55.592604 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv" event={"ID":"c08f40b6-6f77-4a0b-8236-15d653d5fc44","Type":"ContainerStarted","Data":"6ee1cf12f96489f6499862d1492b81141b1ef9b4134e18421498704ba27d5333"} Jan 27 18:54:55 crc kubenswrapper[4915]: I0127 18:54:55.615054 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-m56sv" podStartSLOduration=1.634864493 
podStartE2EDuration="6.615022794s" podCreationTimestamp="2026-01-27 18:54:49 +0000 UTC" firstStartedPulling="2026-01-27 18:54:49.677387937 +0000 UTC m=+781.035241601" lastFinishedPulling="2026-01-27 18:54:54.657546228 +0000 UTC m=+786.015399902" observedRunningTime="2026-01-27 18:54:55.608851955 +0000 UTC m=+786.966705619" watchObservedRunningTime="2026-01-27 18:54:55.615022794 +0000 UTC m=+786.972876498" Jan 27 18:54:59 crc kubenswrapper[4915]: I0127 18:54:59.495850 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-knsbk" Jan 27 18:54:59 crc kubenswrapper[4915]: I0127 18:54:59.728609 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:59 crc kubenswrapper[4915]: I0127 18:54:59.728672 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:54:59 crc kubenswrapper[4915]: I0127 18:54:59.735040 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:55:00 crc kubenswrapper[4915]: I0127 18:55:00.628617 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-597779d89c-5s2f9" Jan 27 18:55:00 crc kubenswrapper[4915]: I0127 18:55:00.700868 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"] Jan 27 18:55:10 crc kubenswrapper[4915]: I0127 18:55:10.070365 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-89j8n" Jan 27 18:55:20 crc kubenswrapper[4915]: I0127 18:55:20.624609 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:55:20 crc kubenswrapper[4915]: I0127 18:55:20.625228 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:55:20 crc kubenswrapper[4915]: I0127 18:55:20.625272 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:55:20 crc kubenswrapper[4915]: I0127 18:55:20.625957 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:55:20 crc kubenswrapper[4915]: I0127 18:55:20.626006 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036" gracePeriod=600 Jan 27 18:55:21 crc kubenswrapper[4915]: I0127 18:55:21.765085 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036" exitCode=0 Jan 27 18:55:21 crc kubenswrapper[4915]: I0127 18:55:21.765160 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036"} Jan 27 18:55:21 crc kubenswrapper[4915]: I0127 18:55:21.765592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44"} Jan 27 18:55:21 crc kubenswrapper[4915]: I0127 18:55:21.765613 4915 scope.go:117] "RemoveContainer" containerID="202bbea1c00d66a9850296c32c281e814e48bfb1a95e769bc8fef7a681ccc40b" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.335104 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:23 crc kubenswrapper[4915]: E0127 18:55:23.335600 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="extract-content" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.335618 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="extract-content" Jan 27 18:55:23 crc kubenswrapper[4915]: E0127 18:55:23.335644 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="extract-utilities" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.335653 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="extract-utilities" Jan 27 18:55:23 crc kubenswrapper[4915]: E0127 18:55:23.335666 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="registry-server" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.335674 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" 
containerName="registry-server" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.335823 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fc551e-86af-4205-a3b0-79be72e100c7" containerName="registry-server" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.336656 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.347241 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.458368 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.459342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc47l\" (UniqueName: \"kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.459381 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.561011 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lc47l\" (UniqueName: \"kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.561069 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.561106 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.561744 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.562011 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.583927 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc47l\" (UniqueName: 
\"kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l\") pod \"redhat-marketplace-htsj9\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.677889 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:23 crc kubenswrapper[4915]: I0127 18:55:23.938348 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:24 crc kubenswrapper[4915]: I0127 18:55:24.795341 4915 generic.go:334] "Generic (PLEG): container finished" podID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerID="7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8" exitCode=0 Jan 27 18:55:24 crc kubenswrapper[4915]: I0127 18:55:24.795395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerDied","Data":"7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8"} Jan 27 18:55:24 crc kubenswrapper[4915]: I0127 18:55:24.795624 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerStarted","Data":"bfa6cbdaadda8e403bd4f7c28f712e6c3432dae92da3fb5013d903d98a17eb4c"} Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.329622 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh"] Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.331179 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.333332 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.344771 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh"] Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.485454 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zxw\" (UniqueName: \"kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.485622 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.485659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: 
I0127 18:55:25.586549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.586923 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.586952 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zxw\" (UniqueName: \"kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.587066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.587586 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.607466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zxw\" (UniqueName: \"kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.653038 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.761834 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sd8x9" podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerName="console" containerID="cri-o://146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08" gracePeriod=15 Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.813883 4915 generic.go:334] "Generic (PLEG): container finished" podID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerID="4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891" exitCode=0 Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.813926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerDied","Data":"4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891"} Jan 27 18:55:25 crc kubenswrapper[4915]: I0127 18:55:25.860642 4915 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh"] Jan 27 18:55:25 crc kubenswrapper[4915]: W0127 18:55:25.865351 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779f706d_66f4_4ade_8c06_88382cc4a041.slice/crio-883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb WatchSource:0}: Error finding container 883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb: Status 404 returned error can't find the container with id 883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.196439 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sd8x9_d97ce85d-90e3-410f-bd7c-812149c6933f/console/0.log" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.196501 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293609 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293657 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293688 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293732 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293748 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.293781 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-sfbzf\" (UniqueName: \"kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.294744 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config" (OuterVolumeSpecName: "console-config") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.294766 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.294909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295356 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca\") pod \"d97ce85d-90e3-410f-bd7c-812149c6933f\" (UID: \"d97ce85d-90e3-410f-bd7c-812149c6933f\") " Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295381 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca" (OuterVolumeSpecName: "service-ca") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295596 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295611 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295620 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.295628 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d97ce85d-90e3-410f-bd7c-812149c6933f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.301395 4915 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf" (OuterVolumeSpecName: "kube-api-access-sfbzf") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "kube-api-access-sfbzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.301388 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.303382 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d97ce85d-90e3-410f-bd7c-812149c6933f" (UID: "d97ce85d-90e3-410f-bd7c-812149c6933f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.397055 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.397417 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d97ce85d-90e3-410f-bd7c-812149c6933f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.397428 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfbzf\" (UniqueName: \"kubernetes.io/projected/d97ce85d-90e3-410f-bd7c-812149c6933f-kube-api-access-sfbzf\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.825132 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerStarted","Data":"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047"} Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.827322 4915 generic.go:334] "Generic (PLEG): container finished" podID="779f706d-66f4-4ade-8c06-88382cc4a041" containerID="0c761df774aeebcb10933043796e303a854fe596e24b80b1299672d6aa83a73b" exitCode=0 Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.827437 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" event={"ID":"779f706d-66f4-4ade-8c06-88382cc4a041","Type":"ContainerDied","Data":"0c761df774aeebcb10933043796e303a854fe596e24b80b1299672d6aa83a73b"} Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.827476 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" event={"ID":"779f706d-66f4-4ade-8c06-88382cc4a041","Type":"ContainerStarted","Data":"883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb"} Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830265 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sd8x9_d97ce85d-90e3-410f-bd7c-812149c6933f/console/0.log" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830323 4915 generic.go:334] "Generic (PLEG): container finished" podID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerID="146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08" exitCode=2 Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830358 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sd8x9" event={"ID":"d97ce85d-90e3-410f-bd7c-812149c6933f","Type":"ContainerDied","Data":"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08"} Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sd8x9" event={"ID":"d97ce85d-90e3-410f-bd7c-812149c6933f","Type":"ContainerDied","Data":"24df1cc5a74222740510a219dc1ba69914f943471e1404fcfa027b80910e3862"} Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830419 4915 scope.go:117] "RemoveContainer" containerID="146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.830412 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sd8x9" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.861694 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-htsj9" podStartSLOduration=2.443521093 podStartE2EDuration="3.861677642s" podCreationTimestamp="2026-01-27 18:55:23 +0000 UTC" firstStartedPulling="2026-01-27 18:55:24.797763865 +0000 UTC m=+816.155617529" lastFinishedPulling="2026-01-27 18:55:26.215920424 +0000 UTC m=+817.573774078" observedRunningTime="2026-01-27 18:55:26.85492569 +0000 UTC m=+818.212779394" watchObservedRunningTime="2026-01-27 18:55:26.861677642 +0000 UTC m=+818.219531316" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.866136 4915 scope.go:117] "RemoveContainer" containerID="146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08" Jan 27 18:55:26 crc kubenswrapper[4915]: E0127 18:55:26.866757 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08\": container with ID starting with 146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08 not found: ID does not exist" containerID="146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.866889 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08"} err="failed to get container status \"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08\": rpc error: code = NotFound desc = could not find container \"146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08\": container with ID starting with 146fe73eb3c89f7e673257e6e3ae198e12975df38dbade4ce1e67070d3981f08 not found: ID does not exist" Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 
18:55:26.904826 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"] Jan 27 18:55:26 crc kubenswrapper[4915]: I0127 18:55:26.914158 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sd8x9"] Jan 27 18:55:27 crc kubenswrapper[4915]: I0127 18:55:27.367555 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" path="/var/lib/kubelet/pods/d97ce85d-90e3-410f-bd7c-812149c6933f/volumes" Jan 27 18:55:28 crc kubenswrapper[4915]: I0127 18:55:28.848765 4915 generic.go:334] "Generic (PLEG): container finished" podID="779f706d-66f4-4ade-8c06-88382cc4a041" containerID="11b0a6ac3834014ee7f9091bf192cd07ecd3a8a375d6d33212050aaa214d19f7" exitCode=0 Jan 27 18:55:28 crc kubenswrapper[4915]: I0127 18:55:28.848860 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" event={"ID":"779f706d-66f4-4ade-8c06-88382cc4a041","Type":"ContainerDied","Data":"11b0a6ac3834014ee7f9091bf192cd07ecd3a8a375d6d33212050aaa214d19f7"} Jan 27 18:55:29 crc kubenswrapper[4915]: I0127 18:55:29.858554 4915 generic.go:334] "Generic (PLEG): container finished" podID="779f706d-66f4-4ade-8c06-88382cc4a041" containerID="b897a5a70c9e082c24b8362808d877a937e81bf5699b194663df8eb2a5ab22ca" exitCode=0 Jan 27 18:55:29 crc kubenswrapper[4915]: I0127 18:55:29.858624 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" event={"ID":"779f706d-66f4-4ade-8c06-88382cc4a041","Type":"ContainerDied","Data":"b897a5a70c9e082c24b8362808d877a937e81bf5699b194663df8eb2a5ab22ca"} Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.244848 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.363068 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle\") pod \"779f706d-66f4-4ade-8c06-88382cc4a041\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.363169 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util\") pod \"779f706d-66f4-4ade-8c06-88382cc4a041\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.363266 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zxw\" (UniqueName: \"kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw\") pod \"779f706d-66f4-4ade-8c06-88382cc4a041\" (UID: \"779f706d-66f4-4ade-8c06-88382cc4a041\") " Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.364267 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle" (OuterVolumeSpecName: "bundle") pod "779f706d-66f4-4ade-8c06-88382cc4a041" (UID: "779f706d-66f4-4ade-8c06-88382cc4a041"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.376493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw" (OuterVolumeSpecName: "kube-api-access-d2zxw") pod "779f706d-66f4-4ade-8c06-88382cc4a041" (UID: "779f706d-66f4-4ade-8c06-88382cc4a041"). InnerVolumeSpecName "kube-api-access-d2zxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.465017 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zxw\" (UniqueName: \"kubernetes.io/projected/779f706d-66f4-4ade-8c06-88382cc4a041-kube-api-access-d2zxw\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.465073 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.541204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util" (OuterVolumeSpecName: "util") pod "779f706d-66f4-4ade-8c06-88382cc4a041" (UID: "779f706d-66f4-4ade-8c06-88382cc4a041"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.566115 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/779f706d-66f4-4ade-8c06-88382cc4a041-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.871600 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" event={"ID":"779f706d-66f4-4ade-8c06-88382cc4a041","Type":"ContainerDied","Data":"883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb"} Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.871667 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="883a229bff10175c9f0f4b2265fa1b70a4ce88d973ea8634ed42a8bada2e8fdb" Jan 27 18:55:31 crc kubenswrapper[4915]: I0127 18:55:31.871634 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh" Jan 27 18:55:33 crc kubenswrapper[4915]: I0127 18:55:33.678330 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:33 crc kubenswrapper[4915]: I0127 18:55:33.678654 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:33 crc kubenswrapper[4915]: I0127 18:55:33.712593 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:33 crc kubenswrapper[4915]: I0127 18:55:33.930550 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:35 crc kubenswrapper[4915]: I0127 18:55:35.695107 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:35 crc kubenswrapper[4915]: I0127 18:55:35.899131 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-htsj9" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="registry-server" containerID="cri-o://afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047" gracePeriod=2 Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.303220 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.428654 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc47l\" (UniqueName: \"kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l\") pod \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.428714 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities\") pod \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.428755 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content\") pod \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\" (UID: \"83280a6b-909c-4fe9-99fa-5ecce9923b8c\") " Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.429620 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities" (OuterVolumeSpecName: "utilities") pod "83280a6b-909c-4fe9-99fa-5ecce9923b8c" (UID: "83280a6b-909c-4fe9-99fa-5ecce9923b8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.436733 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l" (OuterVolumeSpecName: "kube-api-access-lc47l") pod "83280a6b-909c-4fe9-99fa-5ecce9923b8c" (UID: "83280a6b-909c-4fe9-99fa-5ecce9923b8c"). InnerVolumeSpecName "kube-api-access-lc47l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.451717 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83280a6b-909c-4fe9-99fa-5ecce9923b8c" (UID: "83280a6b-909c-4fe9-99fa-5ecce9923b8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.530354 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc47l\" (UniqueName: \"kubernetes.io/projected/83280a6b-909c-4fe9-99fa-5ecce9923b8c-kube-api-access-lc47l\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.530389 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.530401 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83280a6b-909c-4fe9-99fa-5ecce9923b8c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.910319 4915 generic.go:334] "Generic (PLEG): container finished" podID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerID="afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047" exitCode=0 Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.910360 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerDied","Data":"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047"} Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.910386 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-htsj9" event={"ID":"83280a6b-909c-4fe9-99fa-5ecce9923b8c","Type":"ContainerDied","Data":"bfa6cbdaadda8e403bd4f7c28f712e6c3432dae92da3fb5013d903d98a17eb4c"} Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.910402 4915 scope.go:117] "RemoveContainer" containerID="afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.910509 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-htsj9" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.932818 4915 scope.go:117] "RemoveContainer" containerID="4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.935615 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.939689 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-htsj9"] Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.947994 4915 scope.go:117] "RemoveContainer" containerID="7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.963984 4915 scope.go:117] "RemoveContainer" containerID="afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047" Jan 27 18:55:36 crc kubenswrapper[4915]: E0127 18:55:36.966381 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047\": container with ID starting with afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047 not found: ID does not exist" containerID="afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.966411 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047"} err="failed to get container status \"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047\": rpc error: code = NotFound desc = could not find container \"afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047\": container with ID starting with afc8a7c50d1d7d9b691ec20bfd035f33f73115c2d338491ce04aa79732877047 not found: ID does not exist" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.966431 4915 scope.go:117] "RemoveContainer" containerID="4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891" Jan 27 18:55:36 crc kubenswrapper[4915]: E0127 18:55:36.966650 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891\": container with ID starting with 4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891 not found: ID does not exist" containerID="4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.966688 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891"} err="failed to get container status \"4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891\": rpc error: code = NotFound desc = could not find container \"4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891\": container with ID starting with 4b9105e553fd50474a9e566e1c134c7e803c1e2d4a14bef3de04c621c1b1a891 not found: ID does not exist" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.966707 4915 scope.go:117] "RemoveContainer" containerID="7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8" Jan 27 18:55:36 crc kubenswrapper[4915]: E0127 
18:55:36.967182 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8\": container with ID starting with 7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8 not found: ID does not exist" containerID="7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8" Jan 27 18:55:36 crc kubenswrapper[4915]: I0127 18:55:36.967202 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8"} err="failed to get container status \"7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8\": rpc error: code = NotFound desc = could not find container \"7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8\": container with ID starting with 7fbfabae64f14fe40e9f4a0f14f6beaff652fa95bf4be419f01f98a7a3dca4e8 not found: ID does not exist" Jan 27 18:55:37 crc kubenswrapper[4915]: I0127 18:55:37.363955 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" path="/var/lib/kubelet/pods/83280a6b-909c-4fe9-99fa-5ecce9923b8c/volumes" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023332 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"] Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023727 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="extract" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023738 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="extract" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023745 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerName="console" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023750 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerName="console" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023757 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="extract-utilities" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023763 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="extract-utilities" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023773 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="util" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023778 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="util" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023800 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="pull" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023806 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="pull" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023813 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="extract-content" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023819 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="extract-content" Jan 27 18:55:40 crc kubenswrapper[4915]: E0127 18:55:40.023828 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" 
containerName="registry-server" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023834 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="registry-server" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023922 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="779f706d-66f4-4ade-8c06-88382cc4a041" containerName="extract" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023931 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97ce85d-90e3-410f-bd7c-812149c6933f" containerName="console" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.023939 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="83280a6b-909c-4fe9-99fa-5ecce9923b8c" containerName="registry-server" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.024297 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.034245 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.034446 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.034554 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.034698 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.034824 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wm2qp" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 
18:55:40.063685 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"] Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.087447 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.087511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-webhook-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.087674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkjt4\" (UniqueName: \"kubernetes.io/projected/626172e0-b44e-4fc2-8aec-ca96111d56eb-kube-api-access-vkjt4\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.188522 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-webhook-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" Jan 27 18:55:40 crc 
kubenswrapper[4915]: I0127 18:55:40.188625 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkjt4\" (UniqueName: \"kubernetes.io/projected/626172e0-b44e-4fc2-8aec-ca96111d56eb-kube-api-access-vkjt4\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.188703 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.195055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.195949 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/626172e0-b44e-4fc2-8aec-ca96111d56eb-webhook-cert\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.213522 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkjt4\" (UniqueName: \"kubernetes.io/projected/626172e0-b44e-4fc2-8aec-ca96111d56eb-kube-api-access-vkjt4\") pod \"metallb-operator-controller-manager-5b5dc76568-9pz92\" (UID: \"626172e0-b44e-4fc2-8aec-ca96111d56eb\") " pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.260440 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"]
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.261175 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.263771 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.264048 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fjdk4"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.264704 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.280271 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"]
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.289650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-apiservice-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.290131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-webhook-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.290286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgmg\" (UniqueName: \"kubernetes.io/projected/c4299dfb-b500-4c1b-94b2-576aedc4704a-kube-api-access-fbgmg\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.347699 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.391826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgmg\" (UniqueName: \"kubernetes.io/projected/c4299dfb-b500-4c1b-94b2-576aedc4704a-kube-api-access-fbgmg\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.391879 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-apiservice-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.391930 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-webhook-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.395381 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-webhook-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.396389 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4299dfb-b500-4c1b-94b2-576aedc4704a-apiservice-cert\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.412550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgmg\" (UniqueName: \"kubernetes.io/projected/c4299dfb-b500-4c1b-94b2-576aedc4704a-kube-api-access-fbgmg\") pod \"metallb-operator-webhook-server-cf655d589-7h4rj\" (UID: \"c4299dfb-b500-4c1b-94b2-576aedc4704a\") " pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.580937 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.581219 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"]
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.909902 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"]
Jan 27 18:55:40 crc kubenswrapper[4915]: W0127 18:55:40.917140 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4299dfb_b500_4c1b_94b2_576aedc4704a.slice/crio-c0512ad1635a8bffe23678e7d756076a12d41ef0fa63cb277ea5e973dc1055de WatchSource:0}: Error finding container c0512ad1635a8bffe23678e7d756076a12d41ef0fa63cb277ea5e973dc1055de: Status 404 returned error can't find the container with id c0512ad1635a8bffe23678e7d756076a12d41ef0fa63cb277ea5e973dc1055de
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.942510 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj" event={"ID":"c4299dfb-b500-4c1b-94b2-576aedc4704a","Type":"ContainerStarted","Data":"c0512ad1635a8bffe23678e7d756076a12d41ef0fa63cb277ea5e973dc1055de"}
Jan 27 18:55:40 crc kubenswrapper[4915]: I0127 18:55:40.943376 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" event={"ID":"626172e0-b44e-4fc2-8aec-ca96111d56eb","Type":"ContainerStarted","Data":"351df11a3f2e1aa570b60464674bc2fb83e7e9b893567ec28d846a55eaeadc59"}
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.002222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj" event={"ID":"c4299dfb-b500-4c1b-94b2-576aedc4704a","Type":"ContainerStarted","Data":"0f5d01ae35371889afad031c037657aeee730e2de23bd9b6f300acb8ec0ccb4e"}
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.002930 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.005353 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" event={"ID":"626172e0-b44e-4fc2-8aec-ca96111d56eb","Type":"ContainerStarted","Data":"78f9fa95b7dfa0fb05a739075905cd7c3fa4045e544c81bd2b5aed6f64911368"}
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.005611 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.033314 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj" podStartSLOduration=1.962057555 podStartE2EDuration="9.033298345s" podCreationTimestamp="2026-01-27 18:55:40 +0000 UTC" firstStartedPulling="2026-01-27 18:55:40.924123697 +0000 UTC m=+832.281977351" lastFinishedPulling="2026-01-27 18:55:47.995364487 +0000 UTC m=+839.353218141" observedRunningTime="2026-01-27 18:55:49.031622488 +0000 UTC m=+840.389476202" watchObservedRunningTime="2026-01-27 18:55:49.033298345 +0000 UTC m=+840.391152009"
Jan 27 18:55:49 crc kubenswrapper[4915]: I0127 18:55:49.054228 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92" podStartSLOduration=1.659597338 podStartE2EDuration="9.054209777s" podCreationTimestamp="2026-01-27 18:55:40 +0000 UTC" firstStartedPulling="2026-01-27 18:55:40.578017735 +0000 UTC m=+831.935871399" lastFinishedPulling="2026-01-27 18:55:47.972630174 +0000 UTC m=+839.330483838" observedRunningTime="2026-01-27 18:55:49.051878604 +0000 UTC m=+840.409732288" watchObservedRunningTime="2026-01-27 18:55:49.054209777 +0000 UTC m=+840.412063441"
Jan 27 18:56:00 crc kubenswrapper[4915]: I0127 18:56:00.588579 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cf655d589-7h4rj"
Jan 27 18:56:20 crc kubenswrapper[4915]: I0127 18:56:20.352827 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b5dc76568-9pz92"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.112180 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jr8xt"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.114739 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.116813 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kt7w6"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.117094 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.117353 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.141705 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-startup\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143057 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143090 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-reloader\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143118 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-sockets\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143155 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsd9g\" (UniqueName: \"kubernetes.io/projected/b6855ba9-2158-402c-ba8a-207bb1000e0d-kube-api-access-rsd9g\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143188 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-conf\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.143231 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.158939 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.160828 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.167654 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.214594 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2t6jv"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.215718 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.224942 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.225114 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.225262 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fzj7j"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.227424 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.229395 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-kbldf"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.230405 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.234142 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsd9g\" (UniqueName: \"kubernetes.io/projected/b6855ba9-2158-402c-ba8a-207bb1000e0d-kube-api-access-rsd9g\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244334 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptgf\" (UniqueName: \"kubernetes.io/projected/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-kube-api-access-sptgf\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244377 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-conf\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244434 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244455 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244477 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44ps\" (UniqueName: \"kubernetes.io/projected/80cb012a-8a30-42f8-b7f6-95505812f59e-kube-api-access-k44ps\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl46j\" (UniqueName: \"kubernetes.io/projected/65dcae83-67d2-4c0c-8a1b-2693b4024913-kube-api-access-kl46j\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244587 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-startup\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244626 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65dcae83-67d2-4c0c-8a1b-2693b4024913-metallb-excludel2\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244655 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-reloader\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244682 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-sockets\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244702 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-cert\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.244706 4915 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244749 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.244770 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-conf\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.244804 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs podName:b6855ba9-2158-402c-ba8a-207bb1000e0d nodeName:}" failed. No retries permitted until 2026-01-27 18:56:21.744770173 +0000 UTC m=+873.102623947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs") pod "frr-k8s-jr8xt" (UID: "b6855ba9-2158-402c-ba8a-207bb1000e0d") : secret "frr-k8s-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.245044 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-reloader\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.245245 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.245262 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-sockets\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.245805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6855ba9-2158-402c-ba8a-207bb1000e0d-frr-startup\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.261670 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kbldf"]
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.278666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsd9g\" (UniqueName: \"kubernetes.io/projected/b6855ba9-2158-402c-ba8a-207bb1000e0d-kube-api-access-rsd9g\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.345600 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.345654 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65dcae83-67d2-4c0c-8a1b-2693b4024913-metallb-excludel2\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346411 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-cert\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346440 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346476 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sptgf\" (UniqueName: \"kubernetes.io/projected/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-kube-api-access-sptgf\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346511 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44ps\" (UniqueName: \"kubernetes.io/projected/80cb012a-8a30-42f8-b7f6-95505812f59e-kube-api-access-k44ps\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl46j\" (UniqueName: \"kubernetes.io/projected/65dcae83-67d2-4c0c-8a1b-2693b4024913-kube-api-access-kl46j\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.346631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65dcae83-67d2-4c0c-8a1b-2693b4024913-metallb-excludel2\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346743 4915 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346815 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist podName:65dcae83-67d2-4c0c-8a1b-2693b4024913 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:21.846779218 +0000 UTC m=+873.204632882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist") pod "speaker-2t6jv" (UID: "65dcae83-67d2-4c0c-8a1b-2693b4024913") : secret "metallb-memberlist" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346868 4915 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346909 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs podName:65dcae83-67d2-4c0c-8a1b-2693b4024913 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:21.846896471 +0000 UTC m=+873.204750215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs") pod "speaker-2t6jv" (UID: "65dcae83-67d2-4c0c-8a1b-2693b4024913") : secret "speaker-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346948 4915 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.346967 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs podName:80cb012a-8a30-42f8-b7f6-95505812f59e nodeName:}" failed. No retries permitted until 2026-01-27 18:56:21.846961533 +0000 UTC m=+873.204815197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs") pod "controller-6968d8fdc4-kbldf" (UID: "80cb012a-8a30-42f8-b7f6-95505812f59e") : secret "controller-certs-secret" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.349527 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.350231 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.361317 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-cert\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.370610 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sptgf\" (UniqueName: \"kubernetes.io/projected/78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4-kube-api-access-sptgf\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkl8l\" (UID: \"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.380302 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl46j\" (UniqueName: \"kubernetes.io/projected/65dcae83-67d2-4c0c-8a1b-2693b4024913-kube-api-access-kl46j\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.392985 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44ps\" (UniqueName: \"kubernetes.io/projected/80cb012a-8a30-42f8-b7f6-95505812f59e-kube-api-access-k44ps\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.475530 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.753559 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.758336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6855ba9-2158-402c-ba8a-207bb1000e0d-metrics-certs\") pod \"frr-k8s-jr8xt\" (UID: \"b6855ba9-2158-402c-ba8a-207bb1000e0d\") " pod="metallb-system/frr-k8s-jr8xt"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.855142 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.855214 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv"
Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.855328 4915 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.855346 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs\") pod \"controller-6968d8fdc4-kbldf\" (UID:
\"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf" Jan 27 18:56:21 crc kubenswrapper[4915]: E0127 18:56:21.855411 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist podName:65dcae83-67d2-4c0c-8a1b-2693b4024913 nodeName:}" failed. No retries permitted until 2026-01-27 18:56:22.855393987 +0000 UTC m=+874.213247651 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist") pod "speaker-2t6jv" (UID: "65dcae83-67d2-4c0c-8a1b-2693b4024913") : secret "metallb-memberlist" not found Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.859307 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-metrics-certs\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv" Jan 27 18:56:21 crc kubenswrapper[4915]: I0127 18:56:21.860731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80cb012a-8a30-42f8-b7f6-95505812f59e-metrics-certs\") pod \"controller-6968d8fdc4-kbldf\" (UID: \"80cb012a-8a30-42f8-b7f6-95505812f59e\") " pod="metallb-system/controller-6968d8fdc4-kbldf" Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.054959 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jr8xt" Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.147907 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kbldf" Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.375907 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l"] Jan 27 18:56:22 crc kubenswrapper[4915]: W0127 18:56:22.386125 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f43a7d_8f1a_4ce5_b9bf_9dacd8e1b5b4.slice/crio-6b00124a6b179f4929c7c4b7895d3f5f2fd8f4b26e0b4f6641cd0006d696c973 WatchSource:0}: Error finding container 6b00124a6b179f4929c7c4b7895d3f5f2fd8f4b26e0b4f6641cd0006d696c973: Status 404 returned error can't find the container with id 6b00124a6b179f4929c7c4b7895d3f5f2fd8f4b26e0b4f6641cd0006d696c973 Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.430424 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kbldf"] Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.879060 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv" Jan 27 18:56:22 crc kubenswrapper[4915]: I0127 18:56:22.887895 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65dcae83-67d2-4c0c-8a1b-2693b4024913-memberlist\") pod \"speaker-2t6jv\" (UID: \"65dcae83-67d2-4c0c-8a1b-2693b4024913\") " pod="metallb-system/speaker-2t6jv" Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.037629 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2t6jv" Jan 27 18:56:23 crc kubenswrapper[4915]: W0127 18:56:23.054431 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65dcae83_67d2_4c0c_8a1b_2693b4024913.slice/crio-6dfe5e2aa876054fb1d112c62d36f2f90c504a6bc6410dfd50185690bff1555e WatchSource:0}: Error finding container 6dfe5e2aa876054fb1d112c62d36f2f90c504a6bc6410dfd50185690bff1555e: Status 404 returned error can't find the container with id 6dfe5e2aa876054fb1d112c62d36f2f90c504a6bc6410dfd50185690bff1555e Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.218189 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jv" event={"ID":"65dcae83-67d2-4c0c-8a1b-2693b4024913","Type":"ContainerStarted","Data":"6dfe5e2aa876054fb1d112c62d36f2f90c504a6bc6410dfd50185690bff1555e"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.220057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"de8f463f6faaa282f1a90b29bd5c7c3bf20a5b6eea45617402d8de9a6c136ca0"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.221484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l" event={"ID":"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4","Type":"ContainerStarted","Data":"6b00124a6b179f4929c7c4b7895d3f5f2fd8f4b26e0b4f6641cd0006d696c973"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.223527 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kbldf" event={"ID":"80cb012a-8a30-42f8-b7f6-95505812f59e","Type":"ContainerStarted","Data":"bd9202bd894a06dbc920b236c68bec9a36b0b03bd56bd3f3441bd54cab67a26d"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.223563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-6968d8fdc4-kbldf" event={"ID":"80cb012a-8a30-42f8-b7f6-95505812f59e","Type":"ContainerStarted","Data":"c1ea251e575a60c04a980e3670d8ccd99b7064c2c3ecf7dd90d852f3611fe020"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.223576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kbldf" event={"ID":"80cb012a-8a30-42f8-b7f6-95505812f59e","Type":"ContainerStarted","Data":"115efb29a0861b2f6e1f6d626a2e1b4f301d9f685549df962cf1177498d868fa"} Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.223656 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-kbldf" Jan 27 18:56:23 crc kubenswrapper[4915]: I0127 18:56:23.249457 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-kbldf" podStartSLOduration=2.249439341 podStartE2EDuration="2.249439341s" podCreationTimestamp="2026-01-27 18:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:56:23.247054783 +0000 UTC m=+874.604908447" watchObservedRunningTime="2026-01-27 18:56:23.249439341 +0000 UTC m=+874.607293005" Jan 27 18:56:24 crc kubenswrapper[4915]: I0127 18:56:24.247712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jv" event={"ID":"65dcae83-67d2-4c0c-8a1b-2693b4024913","Type":"ContainerStarted","Data":"8dfc75fe39871d39936b2322f4ad08b777f8ef9cf30dab141c49faee8b6dd5e5"} Jan 27 18:56:24 crc kubenswrapper[4915]: I0127 18:56:24.248453 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jv" event={"ID":"65dcae83-67d2-4c0c-8a1b-2693b4024913","Type":"ContainerStarted","Data":"e4f59c7eb4227fb76192d1745af255e99ba48f49edf182fdc06dd1f04acdb2af"} Jan 27 18:56:24 crc kubenswrapper[4915]: I0127 18:56:24.248481 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/speaker-2t6jv" Jan 27 18:56:24 crc kubenswrapper[4915]: I0127 18:56:24.266826 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2t6jv" podStartSLOduration=3.266808221 podStartE2EDuration="3.266808221s" podCreationTimestamp="2026-01-27 18:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:56:24.265766326 +0000 UTC m=+875.623619990" watchObservedRunningTime="2026-01-27 18:56:24.266808221 +0000 UTC m=+875.624661885" Jan 27 18:56:30 crc kubenswrapper[4915]: I0127 18:56:30.302915 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l" event={"ID":"78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4","Type":"ContainerStarted","Data":"aad350e6812b10361a68fea26e33686bdd3e04d42d6c60a6fd9e8a891c362dda"} Jan 27 18:56:30 crc kubenswrapper[4915]: I0127 18:56:30.303491 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l" Jan 27 18:56:30 crc kubenswrapper[4915]: I0127 18:56:30.305153 4915 generic.go:334] "Generic (PLEG): container finished" podID="b6855ba9-2158-402c-ba8a-207bb1000e0d" containerID="1a250ce631a9350ffa05c049f409680cdb44b26e5430e10c02276afe9ac1b73f" exitCode=0 Jan 27 18:56:30 crc kubenswrapper[4915]: I0127 18:56:30.305198 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerDied","Data":"1a250ce631a9350ffa05c049f409680cdb44b26e5430e10c02276afe9ac1b73f"} Jan 27 18:56:30 crc kubenswrapper[4915]: I0127 18:56:30.327650 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l" podStartSLOduration=2.478443878 podStartE2EDuration="9.327627914s" podCreationTimestamp="2026-01-27 
18:56:21 +0000 UTC" firstStartedPulling="2026-01-27 18:56:22.387829294 +0000 UTC m=+873.745682958" lastFinishedPulling="2026-01-27 18:56:29.23701333 +0000 UTC m=+880.594866994" observedRunningTime="2026-01-27 18:56:30.3258435 +0000 UTC m=+881.683697194" watchObservedRunningTime="2026-01-27 18:56:30.327627914 +0000 UTC m=+881.685481588" Jan 27 18:56:31 crc kubenswrapper[4915]: I0127 18:56:31.312489 4915 generic.go:334] "Generic (PLEG): container finished" podID="b6855ba9-2158-402c-ba8a-207bb1000e0d" containerID="7200a6a8c10bc0da57b3287cad8619a34a53e11fab9020556da44f36e20d30c2" exitCode=0 Jan 27 18:56:31 crc kubenswrapper[4915]: I0127 18:56:31.312828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerDied","Data":"7200a6a8c10bc0da57b3287cad8619a34a53e11fab9020556da44f36e20d30c2"} Jan 27 18:56:32 crc kubenswrapper[4915]: I0127 18:56:32.152510 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-kbldf" Jan 27 18:56:32 crc kubenswrapper[4915]: I0127 18:56:32.321915 4915 generic.go:334] "Generic (PLEG): container finished" podID="b6855ba9-2158-402c-ba8a-207bb1000e0d" containerID="2d868656fa131efd6f45ae3e5645965aa876dc0bfec0f243e2aaf8882dfc7b88" exitCode=0 Jan 27 18:56:32 crc kubenswrapper[4915]: I0127 18:56:32.323581 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerDied","Data":"2d868656fa131efd6f45ae3e5645965aa876dc0bfec0f243e2aaf8882dfc7b88"} Jan 27 18:56:33 crc kubenswrapper[4915]: I0127 18:56:33.042420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2t6jv" Jan 27 18:56:33 crc kubenswrapper[4915]: I0127 18:56:33.332848 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" 
event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"330730ff62583fa86c26cdff1d251f962dfcd6cce1e597dfde7d4575fe27be06"} Jan 27 18:56:33 crc kubenswrapper[4915]: I0127 18:56:33.332883 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"6c1f16b68216dc8a862671152d87c0a07c219856fbe9442f886e96e293258c60"} Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.346281 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"1060710f821e62b8f9134e76c8074b8ec5b901f70316c8e6b97edc2822655bd4"} Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.346542 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jr8xt" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.346552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"a41417e0e2d595fde97ecded12c0582ce20b07f5c983c9c0f72263f54c8896f8"} Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.346562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"3d4b60d5b450c8a2c1a318909494351dc43ee5a4e195341572a5bf2af77665e4"} Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.346571 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jr8xt" event={"ID":"b6855ba9-2158-402c-ba8a-207bb1000e0d","Type":"ContainerStarted","Data":"c255c0cd1eb6c0a64de0aaa8ea098091f59d56e4dd96b88b0ea2d45d0c4a10c7"} Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.375156 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-jr8xt" podStartSLOduration=6.451350375 podStartE2EDuration="13.375141169s" podCreationTimestamp="2026-01-27 18:56:21 +0000 UTC" firstStartedPulling="2026-01-27 18:56:22.311960756 +0000 UTC m=+873.669814420" lastFinishedPulling="2026-01-27 18:56:29.23575155 +0000 UTC m=+880.593605214" observedRunningTime="2026-01-27 18:56:34.374338019 +0000 UTC m=+885.732191683" watchObservedRunningTime="2026-01-27 18:56:34.375141169 +0000 UTC m=+885.732994833" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.739216 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th"] Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.740708 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.746099 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th"] Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.747421 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.900911 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjvl\" (UniqueName: \"kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.900978 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:34 crc kubenswrapper[4915]: I0127 18:56:34.901119 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.002920 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.003053 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjvl\" (UniqueName: \"kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.003119 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.003391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.003894 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.025629 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjvl\" (UniqueName: \"kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.113308 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.333515 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th"] Jan 27 18:56:35 crc kubenswrapper[4915]: I0127 18:56:35.369976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" event={"ID":"c0908716-de6b-48e7-a672-bf577fcaba00","Type":"ContainerStarted","Data":"3179d68ac4259aab8a00c8155c34e6273a6e51fea53cd7270295e21991c5dbad"} Jan 27 18:56:36 crc kubenswrapper[4915]: I0127 18:56:36.374413 4915 generic.go:334] "Generic (PLEG): container finished" podID="c0908716-de6b-48e7-a672-bf577fcaba00" containerID="a4396b0bdb1ff76e50255151af152033090c5399f637de7c34b6a251758ddb54" exitCode=0 Jan 27 18:56:36 crc kubenswrapper[4915]: I0127 18:56:36.374509 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" event={"ID":"c0908716-de6b-48e7-a672-bf577fcaba00","Type":"ContainerDied","Data":"a4396b0bdb1ff76e50255151af152033090c5399f637de7c34b6a251758ddb54"} Jan 27 18:56:37 crc kubenswrapper[4915]: I0127 18:56:37.055901 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jr8xt" Jan 27 18:56:37 crc kubenswrapper[4915]: I0127 18:56:37.094048 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jr8xt" Jan 27 18:56:40 crc kubenswrapper[4915]: E0127 18:56:40.802655 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0908716_de6b_48e7_a672_bf577fcaba00.slice/crio-4f6ee96ce04a69cccbda5bed888cfd6f6fa974f9ceb8d0e144a9e5e07bb18c2b.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:56:41 crc kubenswrapper[4915]: I0127 18:56:41.411533 4915 generic.go:334] "Generic (PLEG): container finished" podID="c0908716-de6b-48e7-a672-bf577fcaba00" containerID="4f6ee96ce04a69cccbda5bed888cfd6f6fa974f9ceb8d0e144a9e5e07bb18c2b" exitCode=0 Jan 27 18:56:41 crc kubenswrapper[4915]: I0127 18:56:41.411656 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" event={"ID":"c0908716-de6b-48e7-a672-bf577fcaba00","Type":"ContainerDied","Data":"4f6ee96ce04a69cccbda5bed888cfd6f6fa974f9ceb8d0e144a9e5e07bb18c2b"} Jan 27 18:56:41 crc kubenswrapper[4915]: I0127 18:56:41.483505 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkl8l" Jan 27 18:56:42 crc kubenswrapper[4915]: I0127 18:56:42.059249 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jr8xt" Jan 27 18:56:42 crc kubenswrapper[4915]: I0127 18:56:42.422270 4915 generic.go:334] "Generic (PLEG): container finished" podID="c0908716-de6b-48e7-a672-bf577fcaba00" containerID="3205cea78aed50e5c57357b20504b76478df80148a75bf45aa95faf97f43668f" exitCode=0 Jan 27 18:56:42 crc kubenswrapper[4915]: I0127 18:56:42.422336 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" event={"ID":"c0908716-de6b-48e7-a672-bf577fcaba00","Type":"ContainerDied","Data":"3205cea78aed50e5c57357b20504b76478df80148a75bf45aa95faf97f43668f"} Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.633575 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.744111 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjvl\" (UniqueName: \"kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl\") pod \"c0908716-de6b-48e7-a672-bf577fcaba00\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.744493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle\") pod \"c0908716-de6b-48e7-a672-bf577fcaba00\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.744569 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util\") pod \"c0908716-de6b-48e7-a672-bf577fcaba00\" (UID: \"c0908716-de6b-48e7-a672-bf577fcaba00\") " Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.748753 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle" (OuterVolumeSpecName: "bundle") pod "c0908716-de6b-48e7-a672-bf577fcaba00" (UID: "c0908716-de6b-48e7-a672-bf577fcaba00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.749380 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl" (OuterVolumeSpecName: "kube-api-access-xqjvl") pod "c0908716-de6b-48e7-a672-bf577fcaba00" (UID: "c0908716-de6b-48e7-a672-bf577fcaba00"). InnerVolumeSpecName "kube-api-access-xqjvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.757159 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util" (OuterVolumeSpecName: "util") pod "c0908716-de6b-48e7-a672-bf577fcaba00" (UID: "c0908716-de6b-48e7-a672-bf577fcaba00"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.845829 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjvl\" (UniqueName: \"kubernetes.io/projected/c0908716-de6b-48e7-a672-bf577fcaba00-kube-api-access-xqjvl\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.845863 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:43 crc kubenswrapper[4915]: I0127 18:56:43.845872 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0908716-de6b-48e7-a672-bf577fcaba00-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:56:44 crc kubenswrapper[4915]: I0127 18:56:44.436416 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" event={"ID":"c0908716-de6b-48e7-a672-bf577fcaba00","Type":"ContainerDied","Data":"3179d68ac4259aab8a00c8155c34e6273a6e51fea53cd7270295e21991c5dbad"} Jan 27 18:56:44 crc kubenswrapper[4915]: I0127 18:56:44.436460 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3179d68ac4259aab8a00c8155c34e6273a6e51fea53cd7270295e21991c5dbad" Jan 27 18:56:44 crc kubenswrapper[4915]: I0127 18:56:44.436605 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.502378 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"] Jan 27 18:56:48 crc kubenswrapper[4915]: E0127 18:56:48.502979 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="util" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.502995 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="util" Jan 27 18:56:48 crc kubenswrapper[4915]: E0127 18:56:48.503007 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="pull" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.503015 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="pull" Jan 27 18:56:48 crc kubenswrapper[4915]: E0127 18:56:48.503029 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="extract" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.503037 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="extract" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.503153 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0908716-de6b-48e7-a672-bf577fcaba00" containerName="extract" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.503619 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.506301 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-99fqn" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.506610 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.506785 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.523720 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"] Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.612059 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6e12f9a-d094-41e6-8f92-afaa889df9ab-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.612121 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcg2\" (UniqueName: \"kubernetes.io/projected/a6e12f9a-d094-41e6-8f92-afaa889df9ab-kube-api-access-xlcg2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.712922 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xlcg2\" (UniqueName: \"kubernetes.io/projected/a6e12f9a-d094-41e6-8f92-afaa889df9ab-kube-api-access-xlcg2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"
Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.713048 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6e12f9a-d094-41e6-8f92-afaa889df9ab-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"
Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.713562 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a6e12f9a-d094-41e6-8f92-afaa889df9ab-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"
Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.732426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcg2\" (UniqueName: \"kubernetes.io/projected/a6e12f9a-d094-41e6-8f92-afaa889df9ab-kube-api-access-xlcg2\") pod \"cert-manager-operator-controller-manager-64cf6dff88-l74h4\" (UID: \"a6e12f9a-d094-41e6-8f92-afaa889df9ab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"
Jan 27 18:56:48 crc kubenswrapper[4915]: I0127 18:56:48.819031 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"
Jan 27 18:56:49 crc kubenswrapper[4915]: I0127 18:56:49.285800 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4"]
Jan 27 18:56:49 crc kubenswrapper[4915]: W0127 18:56:49.290987 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6e12f9a_d094_41e6_8f92_afaa889df9ab.slice/crio-ef2efa37d7351a4b9c4962eae18a17eb942a62a545db84ef609a58662756175f WatchSource:0}: Error finding container ef2efa37d7351a4b9c4962eae18a17eb942a62a545db84ef609a58662756175f: Status 404 returned error can't find the container with id ef2efa37d7351a4b9c4962eae18a17eb942a62a545db84ef609a58662756175f
Jan 27 18:56:49 crc kubenswrapper[4915]: I0127 18:56:49.465203 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" event={"ID":"a6e12f9a-d094-41e6-8f92-afaa889df9ab","Type":"ContainerStarted","Data":"ef2efa37d7351a4b9c4962eae18a17eb942a62a545db84ef609a58662756175f"}
Jan 27 18:56:57 crc kubenswrapper[4915]: I0127 18:56:57.518743 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" event={"ID":"a6e12f9a-d094-41e6-8f92-afaa889df9ab","Type":"ContainerStarted","Data":"709cca5cda5b10a7f00f9a3188b325f0cc8973979c1c12bdf5f0ef226ae70412"}
Jan 27 18:56:57 crc kubenswrapper[4915]: I0127 18:56:57.542239 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-l74h4" podStartSLOduration=1.620861318 podStartE2EDuration="9.542216661s" podCreationTimestamp="2026-01-27 18:56:48 +0000 UTC" firstStartedPulling="2026-01-27 18:56:49.293135066 +0000 UTC m=+900.650988730" 
lastFinishedPulling="2026-01-27 18:56:57.214490399 +0000 UTC m=+908.572344073" observedRunningTime="2026-01-27 18:56:57.536719807 +0000 UTC m=+908.894573471" watchObservedRunningTime="2026-01-27 18:56:57.542216661 +0000 UTC m=+908.900070325"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.448855 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-x695t"]
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.450004 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.452869 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8p557"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.453145 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.458123 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.464857 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-x695t"]
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.483861 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.483916 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kdl4\" (UniqueName: 
\"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-kube-api-access-2kdl4\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.584775 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.584855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kdl4\" (UniqueName: \"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-kube-api-access-2kdl4\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.605715 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.607849 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kdl4\" (UniqueName: \"kubernetes.io/projected/ec5c0833-e215-4011-b71a-ed4854ab9be7-kube-api-access-2kdl4\") pod \"cert-manager-webhook-f4fb5df64-x695t\" (UID: \"ec5c0833-e215-4011-b71a-ed4854ab9be7\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:00 crc kubenswrapper[4915]: I0127 18:57:00.775277 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.151846 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-x695t"]
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.546184 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t" event={"ID":"ec5c0833-e215-4011-b71a-ed4854ab9be7","Type":"ContainerStarted","Data":"ce26ad0b6d41065b048fc449b0c84c27f7853b29a47fc208a4454ef475bdbaaf"}
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.864319 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.865336 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.882699 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.908329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 18:57:01.908376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-856ll\" (UniqueName: \"kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:01 crc kubenswrapper[4915]: I0127 
18:57:01.908481 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.009395 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.009460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.009493 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-856ll\" (UniqueName: \"kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.009975 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 
18:57:02.010023 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.031896 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-856ll\" (UniqueName: \"kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll\") pod \"community-operators-8z625\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") " pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.180179 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.261507 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.263234 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.281450 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.313200 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgk76\" (UniqueName: \"kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.313255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.313314 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.413945 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgk76\" (UniqueName: \"kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.413998 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.414021 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.414458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.414556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.440708 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgk76\" (UniqueName: \"kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76\") pod \"certified-operators-j9vg4\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") " pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.510401 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.561982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerStarted","Data":"e72a4ea6fb16750ffe64c59740c116330fb0252757733bf1300f3eaa15213df8"}
Jan 27 18:57:02 crc kubenswrapper[4915]: I0127 18:57:02.582814 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.090284 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.569749 4915 generic.go:334] "Generic (PLEG): container finished" podID="6f50038a-6725-46ec-99fb-5270098f0c43" containerID="15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb" exitCode=0
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.569863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerDied","Data":"15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb"}
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.571295 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerStarted","Data":"434d4f7577c237cd7210767f86ef2f632d1c75cb7006c779d8602e2b545d6739"}
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.586235 4915 generic.go:334] "Generic (PLEG): container finished" podID="597e565b-72eb-448e-8473-e94bfab22ff2" containerID="24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2" exitCode=0
Jan 27 18:57:03 crc kubenswrapper[4915]: I0127 18:57:03.586265 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerDied","Data":"24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2"}
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.345857 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-259gl"]
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.347060 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.351431 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-259gl"]
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.356838 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9zg9s"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.367646 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.367936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqjp\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-kube-api-access-wgqjp\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.469535 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wgqjp\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-kube-api-access-wgqjp\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.469680 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.490298 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgqjp\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-kube-api-access-wgqjp\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.498116 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7aba51ef-4eab-42a1-bb96-a10f2ac2816e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-259gl\" (UID: \"7aba51ef-4eab-42a1-bb96-a10f2ac2816e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.593448 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerStarted","Data":"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea"}
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.600492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerStarted","Data":"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"}
Jan 27 18:57:04 crc kubenswrapper[4915]: I0127 18:57:04.680842 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl"
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.087776 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-259gl"]
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.608717 4915 generic.go:334] "Generic (PLEG): container finished" podID="597e565b-72eb-448e-8473-e94bfab22ff2" containerID="aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3" exitCode=0
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.608809 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerDied","Data":"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"}
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.611611 4915 generic.go:334] "Generic (PLEG): container finished" podID="6f50038a-6725-46ec-99fb-5270098f0c43" containerID="adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea" exitCode=0
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.611692 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerDied","Data":"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea"}
Jan 27 18:57:05 crc kubenswrapper[4915]: I0127 18:57:05.614492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl" 
event={"ID":"7aba51ef-4eab-42a1-bb96-a10f2ac2816e","Type":"ContainerStarted","Data":"5f6fdfc0dcd9c694d0172c3683246e73d1677779757560a14c4c2a523e534053"}
Jan 27 18:57:06 crc kubenswrapper[4915]: I0127 18:57:06.624414 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerStarted","Data":"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d"}
Jan 27 18:57:06 crc kubenswrapper[4915]: I0127 18:57:06.648105 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j9vg4" podStartSLOduration=2.78643163 podStartE2EDuration="4.64808734s" podCreationTimestamp="2026-01-27 18:57:02 +0000 UTC" firstStartedPulling="2026-01-27 18:57:03.573088499 +0000 UTC m=+914.930942163" lastFinishedPulling="2026-01-27 18:57:05.434744209 +0000 UTC m=+916.792597873" observedRunningTime="2026-01-27 18:57:06.643403736 +0000 UTC m=+918.001257400" watchObservedRunningTime="2026-01-27 18:57:06.64808734 +0000 UTC m=+918.005941004"
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.658863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerStarted","Data":"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"}
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.660947 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t" event={"ID":"ec5c0833-e215-4011-b71a-ed4854ab9be7","Type":"ContainerStarted","Data":"ba1ec38ffe7a63e1caa1e33cac83ddcbe464b23758dbc51a05d88c7202c5715e"}
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.661099 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t"
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 
18:57:11.662518 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl" event={"ID":"7aba51ef-4eab-42a1-bb96-a10f2ac2816e","Type":"ContainerStarted","Data":"97221c692f1800ace3722fa99d6b9fb11b9d09e9d53ab081552718b4988e10f2"}
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.687941 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8z625" podStartSLOduration=3.521499939 podStartE2EDuration="10.68792184s" podCreationTimestamp="2026-01-27 18:57:01 +0000 UTC" firstStartedPulling="2026-01-27 18:57:03.587586262 +0000 UTC m=+914.945439926" lastFinishedPulling="2026-01-27 18:57:10.754008143 +0000 UTC m=+922.111861827" observedRunningTime="2026-01-27 18:57:11.686347561 +0000 UTC m=+923.044201255" watchObservedRunningTime="2026-01-27 18:57:11.68792184 +0000 UTC m=+923.045775514"
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.706211 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t" podStartSLOduration=2.106701377 podStartE2EDuration="11.706191985s" podCreationTimestamp="2026-01-27 18:57:00 +0000 UTC" firstStartedPulling="2026-01-27 18:57:01.163998347 +0000 UTC m=+912.521852001" lastFinishedPulling="2026-01-27 18:57:10.763488945 +0000 UTC m=+922.121342609" observedRunningTime="2026-01-27 18:57:11.705332244 +0000 UTC m=+923.063185908" watchObservedRunningTime="2026-01-27 18:57:11.706191985 +0000 UTC m=+923.064045649"
Jan 27 18:57:11 crc kubenswrapper[4915]: I0127 18:57:11.723943 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-259gl" podStartSLOduration=2.047721249 podStartE2EDuration="7.723923888s" podCreationTimestamp="2026-01-27 18:57:04 +0000 UTC" firstStartedPulling="2026-01-27 18:57:05.093571779 +0000 UTC m=+916.451425443" lastFinishedPulling="2026-01-27 18:57:10.769774418 +0000 
UTC m=+922.127628082" observedRunningTime="2026-01-27 18:57:11.72113938 +0000 UTC m=+923.078993044" watchObservedRunningTime="2026-01-27 18:57:11.723923888 +0000 UTC m=+923.081777562"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.181392 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.181438 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.583598 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.583637 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.630555 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:12 crc kubenswrapper[4915]: I0127 18:57:12.712078 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:13 crc kubenswrapper[4915]: I0127 18:57:13.220282 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8z625" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:57:13 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:57:13 crc kubenswrapper[4915]: >
Jan 27 18:57:13 crc kubenswrapper[4915]: I0127 18:57:13.452488 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:14 crc kubenswrapper[4915]: I0127 18:57:14.682580 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j9vg4" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="registry-server" containerID="cri-o://d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d" gracePeriod=2
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.049674 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9vg4"
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.124890 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgk76\" (UniqueName: \"kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76\") pod \"6f50038a-6725-46ec-99fb-5270098f0c43\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") "
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.125059 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content\") pod \"6f50038a-6725-46ec-99fb-5270098f0c43\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") "
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.125378 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities\") pod \"6f50038a-6725-46ec-99fb-5270098f0c43\" (UID: \"6f50038a-6725-46ec-99fb-5270098f0c43\") "
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.126192 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities" (OuterVolumeSpecName: "utilities") pod "6f50038a-6725-46ec-99fb-5270098f0c43" (UID: "6f50038a-6725-46ec-99fb-5270098f0c43"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.130557 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76" (OuterVolumeSpecName: "kube-api-access-qgk76") pod "6f50038a-6725-46ec-99fb-5270098f0c43" (UID: "6f50038a-6725-46ec-99fb-5270098f0c43"). InnerVolumeSpecName "kube-api-access-qgk76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.226873 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.226920 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgk76\" (UniqueName: \"kubernetes.io/projected/6f50038a-6725-46ec-99fb-5270098f0c43-kube-api-access-qgk76\") on node \"crc\" DevicePath \"\"" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.693016 4915 generic.go:334] "Generic (PLEG): container finished" podID="6f50038a-6725-46ec-99fb-5270098f0c43" containerID="d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d" exitCode=0 Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.693079 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerDied","Data":"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d"} Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.693127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9vg4" event={"ID":"6f50038a-6725-46ec-99fb-5270098f0c43","Type":"ContainerDied","Data":"434d4f7577c237cd7210767f86ef2f632d1c75cb7006c779d8602e2b545d6739"} Jan 27 18:57:15 crc kubenswrapper[4915]: 
I0127 18:57:15.693184 4915 scope.go:117] "RemoveContainer" containerID="d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.693189 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9vg4" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.716396 4915 scope.go:117] "RemoveContainer" containerID="adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.748876 4915 scope.go:117] "RemoveContainer" containerID="15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.777923 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-x695t" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.792042 4915 scope.go:117] "RemoveContainer" containerID="d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d" Jan 27 18:57:15 crc kubenswrapper[4915]: E0127 18:57:15.792532 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d\": container with ID starting with d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d not found: ID does not exist" containerID="d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.792568 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d"} err="failed to get container status \"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d\": rpc error: code = NotFound desc = could not find container \"d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d\": 
container with ID starting with d30701a5f725d47d43de868341105e9e1172585f07743b88f2909dedafae116d not found: ID does not exist" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.792594 4915 scope.go:117] "RemoveContainer" containerID="adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea" Jan 27 18:57:15 crc kubenswrapper[4915]: E0127 18:57:15.792995 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea\": container with ID starting with adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea not found: ID does not exist" containerID="adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.793041 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea"} err="failed to get container status \"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea\": rpc error: code = NotFound desc = could not find container \"adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea\": container with ID starting with adf5d66b1b5a8b00ec9f38f4ccbd70bc13fa8aea3f948a6487d0b1777025f0ea not found: ID does not exist" Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.793073 4915 scope.go:117] "RemoveContainer" containerID="15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb" Jan 27 18:57:15 crc kubenswrapper[4915]: E0127 18:57:15.793383 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb\": container with ID starting with 15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb not found: ID does not exist" 
containerID="15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb"
Jan 27 18:57:15 crc kubenswrapper[4915]: I0127 18:57:15.793415 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb"} err="failed to get container status \"15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb\": rpc error: code = NotFound desc = could not find container \"15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb\": container with ID starting with 15ac889c05574b901b4a02d82d7f4139d1b7c48ec7c18d463dca7ad3fd01f0fb not found: ID does not exist"
Jan 27 18:57:16 crc kubenswrapper[4915]: I0127 18:57:16.265051 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f50038a-6725-46ec-99fb-5270098f0c43" (UID: "6f50038a-6725-46ec-99fb-5270098f0c43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:57:16 crc kubenswrapper[4915]: I0127 18:57:16.337849 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:16 crc kubenswrapper[4915]: I0127 18:57:16.340144 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f50038a-6725-46ec-99fb-5270098f0c43-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:57:16 crc kubenswrapper[4915]: I0127 18:57:16.343787 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j9vg4"]
Jan 27 18:57:17 crc kubenswrapper[4915]: I0127 18:57:17.365445 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" path="/var/lib/kubelet/pods/6f50038a-6725-46ec-99fb-5270098f0c43/volumes"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.708197 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-95vhp"]
Jan 27 18:57:20 crc kubenswrapper[4915]: E0127 18:57:20.709483 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="extract-content"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.709511 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="extract-content"
Jan 27 18:57:20 crc kubenswrapper[4915]: E0127 18:57:20.709530 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="registry-server"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.709542 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="registry-server"
Jan 27 18:57:20 crc kubenswrapper[4915]: E0127 18:57:20.709575 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="extract-utilities"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.709586 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="extract-utilities"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.709817 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f50038a-6725-46ec-99fb-5270098f0c43" containerName="registry-server"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.710920 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.722342 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-49wvw"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.734538 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-95vhp"]
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.795842 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-bound-sa-token\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.795977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlzn\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-kube-api-access-fjlzn\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.897550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlzn\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-kube-api-access-fjlzn\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.897652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-bound-sa-token\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.918069 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlzn\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-kube-api-access-fjlzn\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:20 crc kubenswrapper[4915]: I0127 18:57:20.918649 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b3c7c02-6cf8-49db-bd51-038ee25d5613-bound-sa-token\") pod \"cert-manager-86cb77c54b-95vhp\" (UID: \"7b3c7c02-6cf8-49db-bd51-038ee25d5613\") " pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:21 crc kubenswrapper[4915]: I0127 18:57:21.048460 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-95vhp"
Jan 27 18:57:21 crc kubenswrapper[4915]: I0127 18:57:21.452849 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-95vhp"]
Jan 27 18:57:21 crc kubenswrapper[4915]: W0127 18:57:21.456673 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3c7c02_6cf8_49db_bd51_038ee25d5613.slice/crio-85536b7d5150eba407c1e9f14c5392b9e2bba6518a27f89b5ae236eb116baa58 WatchSource:0}: Error finding container 85536b7d5150eba407c1e9f14c5392b9e2bba6518a27f89b5ae236eb116baa58: Status 404 returned error can't find the container with id 85536b7d5150eba407c1e9f14c5392b9e2bba6518a27f89b5ae236eb116baa58
Jan 27 18:57:21 crc kubenswrapper[4915]: I0127 18:57:21.737585 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-95vhp" event={"ID":"7b3c7c02-6cf8-49db-bd51-038ee25d5613","Type":"ContainerStarted","Data":"500debbf022d9ebcd707bf190af9188129cc3ee84a421ba53887d44b9731022e"}
Jan 27 18:57:21 crc kubenswrapper[4915]: I0127 18:57:21.738010 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-95vhp" event={"ID":"7b3c7c02-6cf8-49db-bd51-038ee25d5613","Type":"ContainerStarted","Data":"85536b7d5150eba407c1e9f14c5392b9e2bba6518a27f89b5ae236eb116baa58"}
Jan 27 18:57:21 crc kubenswrapper[4915]: I0127 18:57:21.757477 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-95vhp" podStartSLOduration=1.757435669 podStartE2EDuration="1.757435669s" podCreationTimestamp="2026-01-27 18:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:57:21.756424334 +0000 UTC m=+933.114277998" watchObservedRunningTime="2026-01-27 18:57:21.757435669 +0000 UTC m=+933.115289343"
Jan 27
18:57:22 crc kubenswrapper[4915]: I0127 18:57:22.244958 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:22 crc kubenswrapper[4915]: I0127 18:57:22.297075 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:22 crc kubenswrapper[4915]: I0127 18:57:22.481436 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:23 crc kubenswrapper[4915]: I0127 18:57:23.750626 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8z625" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="registry-server" containerID="cri-o://f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9" gracePeriod=2
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.133888 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.243048 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities\") pod \"597e565b-72eb-448e-8473-e94bfab22ff2\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") "
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.243109 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-856ll\" (UniqueName: \"kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll\") pod \"597e565b-72eb-448e-8473-e94bfab22ff2\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") "
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.243143 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content\") pod \"597e565b-72eb-448e-8473-e94bfab22ff2\" (UID: \"597e565b-72eb-448e-8473-e94bfab22ff2\") "
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.243996 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities" (OuterVolumeSpecName: "utilities") pod "597e565b-72eb-448e-8473-e94bfab22ff2" (UID: "597e565b-72eb-448e-8473-e94bfab22ff2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.245427 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.250957 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll" (OuterVolumeSpecName: "kube-api-access-856ll") pod "597e565b-72eb-448e-8473-e94bfab22ff2" (UID: "597e565b-72eb-448e-8473-e94bfab22ff2"). InnerVolumeSpecName "kube-api-access-856ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.300772 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "597e565b-72eb-448e-8473-e94bfab22ff2" (UID: "597e565b-72eb-448e-8473-e94bfab22ff2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.346661 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-856ll\" (UniqueName: \"kubernetes.io/projected/597e565b-72eb-448e-8473-e94bfab22ff2-kube-api-access-856ll\") on node \"crc\" DevicePath \"\""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.346739 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/597e565b-72eb-448e-8473-e94bfab22ff2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.758893 4915 generic.go:334] "Generic (PLEG): container finished" podID="597e565b-72eb-448e-8473-e94bfab22ff2" containerID="f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9" exitCode=0
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.758931 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerDied","Data":"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"}
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.758960 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z625" event={"ID":"597e565b-72eb-448e-8473-e94bfab22ff2","Type":"ContainerDied","Data":"e72a4ea6fb16750ffe64c59740c116330fb0252757733bf1300f3eaa15213df8"}
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.758972 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z625"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.758978 4915 scope.go:117] "RemoveContainer" containerID="f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.790303 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.794338 4915 scope.go:117] "RemoveContainer" containerID="aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.794584 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8z625"]
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.810034 4915 scope.go:117] "RemoveContainer" containerID="24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.827357 4915 scope.go:117] "RemoveContainer" containerID="f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"
Jan 27 18:57:24 crc kubenswrapper[4915]: E0127 18:57:24.829234 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9\": container with ID starting with f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9 not found: ID does not exist" containerID="f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.829303 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9"} err="failed to get container status \"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9\": rpc error: code = NotFound desc = could not find
container \"f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9\": container with ID starting with f333afbee300a80b82a25f530a7509eddf984f5c1acd6619fae341d927f5e0f9 not found: ID does not exist"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.829344 4915 scope.go:117] "RemoveContainer" containerID="aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"
Jan 27 18:57:24 crc kubenswrapper[4915]: E0127 18:57:24.829809 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3\": container with ID starting with aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3 not found: ID does not exist" containerID="aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.829857 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3"} err="failed to get container status \"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3\": rpc error: code = NotFound desc = could not find container \"aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3\": container with ID starting with aa456ba938da39320e68cce5eef5b057fe30a375c01500764ebd57a47c7ba5d3 not found: ID does not exist"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.829889 4915 scope.go:117] "RemoveContainer" containerID="24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2"
Jan 27 18:57:24 crc kubenswrapper[4915]: E0127 18:57:24.830393 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2\": container with ID starting with 24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2 not found: ID does not exist" containerID="24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2"
Jan 27 18:57:24 crc kubenswrapper[4915]: I0127 18:57:24.830428 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2"} err="failed to get container status \"24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2\": rpc error: code = NotFound desc = could not find container \"24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2\": container with ID starting with 24d6a8826e8afce09c45aa25f9aa5e33ce2500ec87469f03c6352e11b7602cc2 not found: ID does not exist"
Jan 27 18:57:25 crc kubenswrapper[4915]: I0127 18:57:25.366900 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" path="/var/lib/kubelet/pods/597e565b-72eb-448e-8473-e94bfab22ff2/volumes"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.272158 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"]
Jan 27 18:57:29 crc kubenswrapper[4915]: E0127 18:57:29.273051 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="extract-content"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.273067 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="extract-content"
Jan 27 18:57:29 crc kubenswrapper[4915]: E0127 18:57:29.273078 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="registry-server"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.273087 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="registry-server"
Jan 27 18:57:29 crc kubenswrapper[4915]: E0127 18:57:29.273113 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="extract-utilities"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.273121 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="extract-utilities"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.273213 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="597e565b-72eb-448e-8473-e94bfab22ff2" containerName="registry-server"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.273673 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.275856 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mw2vr"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.276424 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.279882 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.291076 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"]
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.319632 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qq8n\" (UniqueName: \"kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n\") pod \"openstack-operator-index-rj5kh\" (UID: \"64fb92c4-0953-4f15-ae63-773c817ac4cb\") " pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.420402 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qq8n\" (UniqueName: \"kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n\") pod \"openstack-operator-index-rj5kh\" (UID: \"64fb92c4-0953-4f15-ae63-773c817ac4cb\") " pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.459649 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qq8n\" (UniqueName: \"kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n\") pod \"openstack-operator-index-rj5kh\" (UID: \"64fb92c4-0953-4f15-ae63-773c817ac4cb\") " pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:29 crc kubenswrapper[4915]: I0127 18:57:29.591894 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:30 crc kubenswrapper[4915]: I0127 18:57:30.042626 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"]
Jan 27 18:57:30 crc kubenswrapper[4915]: W0127 18:57:30.043573 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fb92c4_0953_4f15_ae63_773c817ac4cb.slice/crio-489fbbbaa01c92cb87591c9ddf5e24ba71c7586212c22941159bd9e9143a6ffa WatchSource:0}: Error finding container 489fbbbaa01c92cb87591c9ddf5e24ba71c7586212c22941159bd9e9143a6ffa: Status 404 returned error can't find the container with id 489fbbbaa01c92cb87591c9ddf5e24ba71c7586212c22941159bd9e9143a6ffa
Jan 27 18:57:30 crc kubenswrapper[4915]: I0127 18:57:30.802383 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rj5kh" event={"ID":"64fb92c4-0953-4f15-ae63-773c817ac4cb","Type":"ContainerStarted","Data":"489fbbbaa01c92cb87591c9ddf5e24ba71c7586212c22941159bd9e9143a6ffa"}
Jan 27 18:57:32 crc kubenswrapper[4915]: I0127
18:57:32.645966 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"]
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.255540 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rg6kn"]
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.256407 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rg6kn"
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.265557 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rg6kn"]
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.300503 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvsh\" (UniqueName: \"kubernetes.io/projected/4741f828-8ade-428a-9903-cca6e8e1dc56-kube-api-access-gzvsh\") pod \"openstack-operator-index-rg6kn\" (UID: \"4741f828-8ade-428a-9903-cca6e8e1dc56\") " pod="openstack-operators/openstack-operator-index-rg6kn"
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.401523 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzvsh\" (UniqueName: \"kubernetes.io/projected/4741f828-8ade-428a-9903-cca6e8e1dc56-kube-api-access-gzvsh\") pod \"openstack-operator-index-rg6kn\" (UID: \"4741f828-8ade-428a-9903-cca6e8e1dc56\") " pod="openstack-operators/openstack-operator-index-rg6kn"
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.421941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvsh\" (UniqueName: \"kubernetes.io/projected/4741f828-8ade-428a-9903-cca6e8e1dc56-kube-api-access-gzvsh\") pod \"openstack-operator-index-rg6kn\" (UID: \"4741f828-8ade-428a-9903-cca6e8e1dc56\") " pod="openstack-operators/openstack-operator-index-rg6kn"
Jan 27 18:57:33 crc kubenswrapper[4915]: I0127 18:57:33.616218 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rg6kn"
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.541639 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rg6kn"]
Jan 27 18:57:34 crc kubenswrapper[4915]: W0127 18:57:34.544157 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4741f828_8ade_428a_9903_cca6e8e1dc56.slice/crio-9bb8ac7cdff5139fe8bbabab27e53085685775b74caa2c43660f0dd8fa8a684a WatchSource:0}: Error finding container 9bb8ac7cdff5139fe8bbabab27e53085685775b74caa2c43660f0dd8fa8a684a: Status 404 returned error can't find the container with id 9bb8ac7cdff5139fe8bbabab27e53085685775b74caa2c43660f0dd8fa8a684a
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.836807 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rg6kn" event={"ID":"4741f828-8ade-428a-9903-cca6e8e1dc56","Type":"ContainerStarted","Data":"b2feb0819e6b28900af38523427c75a04e1cba5cc13ea55f4b565d5a247e0e56"}
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.837200 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rg6kn" event={"ID":"4741f828-8ade-428a-9903-cca6e8e1dc56","Type":"ContainerStarted","Data":"9bb8ac7cdff5139fe8bbabab27e53085685775b74caa2c43660f0dd8fa8a684a"}
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.839299 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rj5kh" event={"ID":"64fb92c4-0953-4f15-ae63-773c817ac4cb","Type":"ContainerStarted","Data":"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c"}
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.839431 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rj5kh" podUID="64fb92c4-0953-4f15-ae63-773c817ac4cb" containerName="registry-server" containerID="cri-o://b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c" gracePeriod=2
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.856007 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rg6kn" podStartSLOduration=1.790683258 podStartE2EDuration="1.85598138s" podCreationTimestamp="2026-01-27 18:57:33 +0000 UTC" firstStartedPulling="2026-01-27 18:57:34.550171952 +0000 UTC m=+945.908025656" lastFinishedPulling="2026-01-27 18:57:34.615470074 +0000 UTC m=+945.973323778" observedRunningTime="2026-01-27 18:57:34.852238498 +0000 UTC m=+946.210092172" watchObservedRunningTime="2026-01-27 18:57:34.85598138 +0000 UTC m=+946.213835054"
Jan 27 18:57:34 crc kubenswrapper[4915]: I0127 18:57:34.873316 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rj5kh" podStartSLOduration=1.815262058 podStartE2EDuration="5.873294812s" podCreationTimestamp="2026-01-27 18:57:29 +0000 UTC" firstStartedPulling="2026-01-27 18:57:30.048084918 +0000 UTC m=+941.405938592" lastFinishedPulling="2026-01-27 18:57:34.106117642 +0000 UTC m=+945.463971346" observedRunningTime="2026-01-27 18:57:34.868672249 +0000 UTC m=+946.226525943" watchObservedRunningTime="2026-01-27 18:57:34.873294812 +0000 UTC m=+946.231148486"
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.214194 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rj5kh"
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.227674 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qq8n\" (UniqueName: \"kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n\") pod \"64fb92c4-0953-4f15-ae63-773c817ac4cb\" (UID: \"64fb92c4-0953-4f15-ae63-773c817ac4cb\") "
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.234524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n" (OuterVolumeSpecName: "kube-api-access-4qq8n") pod "64fb92c4-0953-4f15-ae63-773c817ac4cb" (UID: "64fb92c4-0953-4f15-ae63-773c817ac4cb"). InnerVolumeSpecName "kube-api-access-4qq8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.328414 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qq8n\" (UniqueName: \"kubernetes.io/projected/64fb92c4-0953-4f15-ae63-773c817ac4cb-kube-api-access-4qq8n\") on node \"crc\" DevicePath \"\""
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.848082 4915 generic.go:334] "Generic (PLEG): container finished" podID="64fb92c4-0953-4f15-ae63-773c817ac4cb" containerID="b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c" exitCode=0
Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.848144 4915 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-rj5kh" Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.848176 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rj5kh" event={"ID":"64fb92c4-0953-4f15-ae63-773c817ac4cb","Type":"ContainerDied","Data":"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c"} Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.848643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rj5kh" event={"ID":"64fb92c4-0953-4f15-ae63-773c817ac4cb","Type":"ContainerDied","Data":"489fbbbaa01c92cb87591c9ddf5e24ba71c7586212c22941159bd9e9143a6ffa"} Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.848698 4915 scope.go:117] "RemoveContainer" containerID="b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c" Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.869905 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"] Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.876447 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rj5kh"] Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.877104 4915 scope.go:117] "RemoveContainer" containerID="b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c" Jan 27 18:57:35 crc kubenswrapper[4915]: E0127 18:57:35.877550 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c\": container with ID starting with b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c not found: ID does not exist" containerID="b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c" Jan 27 18:57:35 crc kubenswrapper[4915]: I0127 18:57:35.877581 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c"} err="failed to get container status \"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c\": rpc error: code = NotFound desc = could not find container \"b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c\": container with ID starting with b7f97fbf35d0194a585fd3672a390845a0400275850cdbb9966d7214ef39850c not found: ID does not exist" Jan 27 18:57:37 crc kubenswrapper[4915]: I0127 18:57:37.363356 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64fb92c4-0953-4f15-ae63-773c817ac4cb" path="/var/lib/kubelet/pods/64fb92c4-0953-4f15-ae63-773c817ac4cb/volumes" Jan 27 18:57:43 crc kubenswrapper[4915]: I0127 18:57:43.616606 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rg6kn" Jan 27 18:57:43 crc kubenswrapper[4915]: I0127 18:57:43.617270 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rg6kn" Jan 27 18:57:43 crc kubenswrapper[4915]: I0127 18:57:43.656460 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rg6kn" Jan 27 18:57:43 crc kubenswrapper[4915]: I0127 18:57:43.929312 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rg6kn" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.280730 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96"] Jan 27 18:57:45 crc kubenswrapper[4915]: E0127 18:57:45.281356 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64fb92c4-0953-4f15-ae63-773c817ac4cb" containerName="registry-server" Jan 27 18:57:45 crc kubenswrapper[4915]: 
I0127 18:57:45.281373 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="64fb92c4-0953-4f15-ae63-773c817ac4cb" containerName="registry-server" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.281540 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="64fb92c4-0953-4f15-ae63-773c817ac4cb" containerName="registry-server" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.295183 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.298304 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mljbc" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.305270 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96"] Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.464812 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.464935 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8m9\" (UniqueName: \"kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 
18:57:45.464988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.566803 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.566859 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8m9\" (UniqueName: \"kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.566910 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.567510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.567885 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.590488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw8m9\" (UniqueName: \"kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9\") pod \"73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.617009 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.866710 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96"] Jan 27 18:57:45 crc kubenswrapper[4915]: W0127 18:57:45.871498 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c204dc6_9798_4818_8a10_ef8ccc2ce918.slice/crio-ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd WatchSource:0}: Error finding container ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd: Status 404 returned error can't find the container with id ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd Jan 27 18:57:45 crc kubenswrapper[4915]: I0127 18:57:45.916032 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" event={"ID":"3c204dc6-9798-4818-8a10-ef8ccc2ce918","Type":"ContainerStarted","Data":"ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd"} Jan 27 18:57:46 crc kubenswrapper[4915]: I0127 18:57:46.922924 4915 generic.go:334] "Generic (PLEG): container finished" podID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerID="2de43be4f9d262e8471ed68218315d4024c2bf3fd915048af9bc30c3288b55a1" exitCode=0 Jan 27 18:57:46 crc kubenswrapper[4915]: I0127 18:57:46.923211 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" event={"ID":"3c204dc6-9798-4818-8a10-ef8ccc2ce918","Type":"ContainerDied","Data":"2de43be4f9d262e8471ed68218315d4024c2bf3fd915048af9bc30c3288b55a1"} Jan 27 18:57:47 crc kubenswrapper[4915]: I0127 18:57:47.934099 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerID="9b1ab571da9c390fb5be159c09df312d639374a331898cf3b15d6f16e8efa48d" exitCode=0 Jan 27 18:57:47 crc kubenswrapper[4915]: I0127 18:57:47.934169 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" event={"ID":"3c204dc6-9798-4818-8a10-ef8ccc2ce918","Type":"ContainerDied","Data":"9b1ab571da9c390fb5be159c09df312d639374a331898cf3b15d6f16e8efa48d"} Jan 27 18:57:48 crc kubenswrapper[4915]: I0127 18:57:48.943295 4915 generic.go:334] "Generic (PLEG): container finished" podID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerID="9285065738ea8efd70881bfe1c3eb8e2095f564c458f338682c6e1da7852fca6" exitCode=0 Jan 27 18:57:48 crc kubenswrapper[4915]: I0127 18:57:48.943375 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" event={"ID":"3c204dc6-9798-4818-8a10-ef8ccc2ce918","Type":"ContainerDied","Data":"9285065738ea8efd70881bfe1c3eb8e2095f564c458f338682c6e1da7852fca6"} Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.228451 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.242194 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw8m9\" (UniqueName: \"kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9\") pod \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.242280 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util\") pod \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.242334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle\") pod \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\" (UID: \"3c204dc6-9798-4818-8a10-ef8ccc2ce918\") " Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.244316 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle" (OuterVolumeSpecName: "bundle") pod "3c204dc6-9798-4818-8a10-ef8ccc2ce918" (UID: "3c204dc6-9798-4818-8a10-ef8ccc2ce918"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.253668 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9" (OuterVolumeSpecName: "kube-api-access-jw8m9") pod "3c204dc6-9798-4818-8a10-ef8ccc2ce918" (UID: "3c204dc6-9798-4818-8a10-ef8ccc2ce918"). InnerVolumeSpecName "kube-api-access-jw8m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.266666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util" (OuterVolumeSpecName: "util") pod "3c204dc6-9798-4818-8a10-ef8ccc2ce918" (UID: "3c204dc6-9798-4818-8a10-ef8ccc2ce918"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.343706 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.343735 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw8m9\" (UniqueName: \"kubernetes.io/projected/3c204dc6-9798-4818-8a10-ef8ccc2ce918-kube-api-access-jw8m9\") on node \"crc\" DevicePath \"\"" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.343746 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c204dc6-9798-4818-8a10-ef8ccc2ce918-util\") on node \"crc\" DevicePath \"\"" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.625069 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.625160 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.963082 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" event={"ID":"3c204dc6-9798-4818-8a10-ef8ccc2ce918","Type":"ContainerDied","Data":"ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd"} Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.963140 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecd5d9aa101149330b8f9bd18058c77736491b7f3e9bbbbe27c2d70f875b96fd" Jan 27 18:57:50 crc kubenswrapper[4915]: I0127 18:57:50.963228 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.269775 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd"] Jan 27 18:57:54 crc kubenswrapper[4915]: E0127 18:57:54.270314 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="util" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.270327 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="util" Jan 27 18:57:54 crc kubenswrapper[4915]: E0127 18:57:54.270345 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="extract" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.270351 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="extract" Jan 27 18:57:54 crc kubenswrapper[4915]: E0127 18:57:54.270360 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="pull" Jan 27 18:57:54 crc kubenswrapper[4915]: 
I0127 18:57:54.270368 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="pull" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.270477 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c204dc6-9798-4818-8a10-ef8ccc2ce918" containerName="extract" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.270878 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.272550 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8tck4" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.302532 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd"] Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.396820 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttsr4\" (UniqueName: \"kubernetes.io/projected/9c22429f-a66c-41ae-b664-8f986ca713ef-kube-api-access-ttsr4\") pod \"openstack-operator-controller-init-67d88b5675-dgbhd\" (UID: \"9c22429f-a66c-41ae-b664-8f986ca713ef\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.497763 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttsr4\" (UniqueName: \"kubernetes.io/projected/9c22429f-a66c-41ae-b664-8f986ca713ef-kube-api-access-ttsr4\") pod \"openstack-operator-controller-init-67d88b5675-dgbhd\" (UID: \"9c22429f-a66c-41ae-b664-8f986ca713ef\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.516077 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttsr4\" (UniqueName: \"kubernetes.io/projected/9c22429f-a66c-41ae-b664-8f986ca713ef-kube-api-access-ttsr4\") pod \"openstack-operator-controller-init-67d88b5675-dgbhd\" (UID: \"9c22429f-a66c-41ae-b664-8f986ca713ef\") " pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:57:54 crc kubenswrapper[4915]: I0127 18:57:54.586808 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:57:55 crc kubenswrapper[4915]: I0127 18:57:55.026880 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd"] Jan 27 18:57:56 crc kubenswrapper[4915]: I0127 18:57:56.010766 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" event={"ID":"9c22429f-a66c-41ae-b664-8f986ca713ef","Type":"ContainerStarted","Data":"11b4b69401732291c382e42677aeb8c8ef81802422534b1922b7eb24768595d8"} Jan 27 18:58:00 crc kubenswrapper[4915]: I0127 18:58:00.051467 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" event={"ID":"9c22429f-a66c-41ae-b664-8f986ca713ef","Type":"ContainerStarted","Data":"08521c8da35a5bb952c7c901371e05f1c91d12e0e54bfda426756cd9785bb6a0"} Jan 27 18:58:00 crc kubenswrapper[4915]: I0127 18:58:00.052995 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:58:04 crc kubenswrapper[4915]: I0127 18:58:04.590233 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" Jan 27 18:58:04 crc kubenswrapper[4915]: I0127 18:58:04.637045 4915 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-67d88b5675-dgbhd" podStartSLOduration=6.154124174 podStartE2EDuration="10.63701939s" podCreationTimestamp="2026-01-27 18:57:54 +0000 UTC" firstStartedPulling="2026-01-27 18:57:55.035596739 +0000 UTC m=+966.393450403" lastFinishedPulling="2026-01-27 18:57:59.518491955 +0000 UTC m=+970.876345619" observedRunningTime="2026-01-27 18:58:00.09726084 +0000 UTC m=+971.455114504" watchObservedRunningTime="2026-01-27 18:58:04.63701939 +0000 UTC m=+975.994873094" Jan 27 18:58:20 crc kubenswrapper[4915]: I0127 18:58:20.625095 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:58:20 crc kubenswrapper[4915]: I0127 18:58:20.625864 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.820426 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.821667 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.824183 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t4hpx" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.825909 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.826663 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.830924 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.831620 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.833261 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-j2lqv" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.837192 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zhfbv" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.840404 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.841372 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.844856 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.847268 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mxszk" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.850101 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.855552 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.856468 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.858310 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cdfsf" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.866405 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.871930 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.872212 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjnz\" (UniqueName: \"kubernetes.io/projected/841f427a-0fba-4d27-b325-9e048c7242d0-kube-api-access-qjjnz\") pod \"designate-operator-controller-manager-77554cdc5c-4jwbr\" (UID: \"841f427a-0fba-4d27-b325-9e048c7242d0\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.872271 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krkp\" (UniqueName: \"kubernetes.io/projected/3abe8388-a712-4933-a52a-020b4d9cf1a1-kube-api-access-9krkp\") pod \"heat-operator-controller-manager-575ffb885b-lfz95\" (UID: \"3abe8388-a712-4933-a52a-020b4d9cf1a1\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.872312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9q87\" (UniqueName: \"kubernetes.io/projected/efcf0ee2-8cec-485e-b699-9228967f50de-kube-api-access-r9q87\") pod 
\"barbican-operator-controller-manager-65ff799cfd-mrctf\" (UID: \"efcf0ee2-8cec-485e-b699-9228967f50de\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.884403 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.894712 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.895606 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.902889 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dzs9j" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.933966 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.951595 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.952328 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.961307 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-grblw" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.970039 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn"] Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.971006 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.986034 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvr8\" (UniqueName: \"kubernetes.io/projected/411fdf42-e261-4eb9-a72b-e5067da8116d-kube-api-access-hjvr8\") pod \"cinder-operator-controller-manager-655bf9cfbb-l5rlq\" (UID: \"411fdf42-e261-4eb9-a72b-e5067da8116d\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.986111 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krkp\" (UniqueName: \"kubernetes.io/projected/3abe8388-a712-4933-a52a-020b4d9cf1a1-kube-api-access-9krkp\") pod \"heat-operator-controller-manager-575ffb885b-lfz95\" (UID: \"3abe8388-a712-4933-a52a-020b4d9cf1a1\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.986199 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9q87\" (UniqueName: \"kubernetes.io/projected/efcf0ee2-8cec-485e-b699-9228967f50de-kube-api-access-r9q87\") pod \"barbican-operator-controller-manager-65ff799cfd-mrctf\" (UID: 
\"efcf0ee2-8cec-485e-b699-9228967f50de\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.986221 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbvb\" (UniqueName: \"kubernetes.io/projected/3db878ad-a2f4-4bc6-bc28-a91ace116f6c-kube-api-access-8dbvb\") pod \"glance-operator-controller-manager-67dd55ff59-2wrrs\" (UID: \"3db878ad-a2f4-4bc6-bc28-a91ace116f6c\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:22 crc kubenswrapper[4915]: I0127 18:58:22.986443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjnz\" (UniqueName: \"kubernetes.io/projected/841f427a-0fba-4d27-b325-9e048c7242d0-kube-api-access-qjjnz\") pod \"designate-operator-controller-manager-77554cdc5c-4jwbr\" (UID: \"841f427a-0fba-4d27-b325-9e048c7242d0\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.019967 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-c72xq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.035174 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krkp\" (UniqueName: \"kubernetes.io/projected/3abe8388-a712-4933-a52a-020b4d9cf1a1-kube-api-access-9krkp\") pod \"heat-operator-controller-manager-575ffb885b-lfz95\" (UID: \"3abe8388-a712-4933-a52a-020b4d9cf1a1\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.035169 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.036295 4915 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.040660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9q87\" (UniqueName: \"kubernetes.io/projected/efcf0ee2-8cec-485e-b699-9228967f50de-kube-api-access-r9q87\") pod \"barbican-operator-controller-manager-65ff799cfd-mrctf\" (UID: \"efcf0ee2-8cec-485e-b699-9228967f50de\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.047297 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-k8fz5" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.047532 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.049419 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.057456 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.057972 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjnz\" (UniqueName: \"kubernetes.io/projected/841f427a-0fba-4d27-b325-9e048c7242d0-kube-api-access-qjjnz\") pod \"designate-operator-controller-manager-77554cdc5c-4jwbr\" (UID: \"841f427a-0fba-4d27-b325-9e048c7242d0\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.063097 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.081839 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.082696 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.085929 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.086811 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.087390 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvr8\" (UniqueName: \"kubernetes.io/projected/411fdf42-e261-4eb9-a72b-e5067da8116d-kube-api-access-hjvr8\") pod \"cinder-operator-controller-manager-655bf9cfbb-l5rlq\" (UID: \"411fdf42-e261-4eb9-a72b-e5067da8116d\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.087437 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bg9h\" (UniqueName: \"kubernetes.io/projected/a54f43cb-a1a3-47dc-8f79-9902abf8e15a-kube-api-access-9bg9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-d7jbs\" (UID: \"a54f43cb-a1a3-47dc-8f79-9902abf8e15a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.087473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8dbvb\" (UniqueName: \"kubernetes.io/projected/3db878ad-a2f4-4bc6-bc28-a91ace116f6c-kube-api-access-8dbvb\") pod \"glance-operator-controller-manager-67dd55ff59-2wrrs\" (UID: \"3db878ad-a2f4-4bc6-bc28-a91ace116f6c\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.087518 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rgg\" (UniqueName: \"kubernetes.io/projected/157f36e8-64c1-41f5-a134-28a3dee716a0-kube-api-access-p9rgg\") pod \"keystone-operator-controller-manager-55f684fd56-k2wrn\" (UID: \"157f36e8-64c1-41f5-a134-28a3dee716a0\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.087537 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn5xv\" (UniqueName: \"kubernetes.io/projected/c4fd8f30-3efb-4002-9508-247a3973daf5-kube-api-access-jn5xv\") pod \"manila-operator-controller-manager-849fcfbb6b-7499c\" (UID: \"c4fd8f30-3efb-4002-9508-247a3973daf5\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.093185 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.093901 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.105844 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.118354 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-m2bmr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.118534 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sfblg" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.118666 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-566fl" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.118770 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.147863 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.148985 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.150236 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.159459 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvr8\" (UniqueName: \"kubernetes.io/projected/411fdf42-e261-4eb9-a72b-e5067da8116d-kube-api-access-hjvr8\") pod \"cinder-operator-controller-manager-655bf9cfbb-l5rlq\" (UID: \"411fdf42-e261-4eb9-a72b-e5067da8116d\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.159528 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.163107 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.172618 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g7kw7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.177431 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbvb\" (UniqueName: \"kubernetes.io/projected/3db878ad-a2f4-4bc6-bc28-a91ace116f6c-kube-api-access-8dbvb\") pod \"glance-operator-controller-manager-67dd55ff59-2wrrs\" (UID: \"3db878ad-a2f4-4bc6-bc28-a91ace116f6c\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.180086 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.192848 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196312 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9rgg\" (UniqueName: \"kubernetes.io/projected/157f36e8-64c1-41f5-a134-28a3dee716a0-kube-api-access-p9rgg\") pod \"keystone-operator-controller-manager-55f684fd56-k2wrn\" (UID: \"157f36e8-64c1-41f5-a134-28a3dee716a0\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196360 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5xv\" (UniqueName: \"kubernetes.io/projected/c4fd8f30-3efb-4002-9508-247a3973daf5-kube-api-access-jn5xv\") pod \"manila-operator-controller-manager-849fcfbb6b-7499c\" (UID: \"c4fd8f30-3efb-4002-9508-247a3973daf5\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196405 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgrr\" (UniqueName: \"kubernetes.io/projected/4e9d96b4-5711-408c-8f62-198e8a9af22f-kube-api-access-6lgrr\") pod 
\"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196432 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjnj\" (UniqueName: \"kubernetes.io/projected/1390839d-2560-4530-9a71-15568aa4e400-kube-api-access-pvjnj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7\" (UID: \"1390839d-2560-4530-9a71-15568aa4e400\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196471 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs72g\" (UniqueName: \"kubernetes.io/projected/b8d0acd2-831c-4c69-bb9e-661c93caf365-kube-api-access-gs72g\") pod \"neutron-operator-controller-manager-7ffd8d76d4-b4mh9\" (UID: \"b8d0acd2-831c-4c69-bb9e-661c93caf365\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bg9h\" (UniqueName: \"kubernetes.io/projected/a54f43cb-a1a3-47dc-8f79-9902abf8e15a-kube-api-access-9bg9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-d7jbs\" (UID: \"a54f43cb-a1a3-47dc-8f79-9902abf8e15a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196524 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjwb\" (UniqueName: \"kubernetes.io/projected/ff429293-b11f-4482-90e7-41d277b5a044-kube-api-access-kwjwb\") pod \"ironic-operator-controller-manager-768b776ffb-c298f\" (UID: 
\"ff429293-b11f-4482-90e7-41d277b5a044\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.196551 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hfls\" (UniqueName: \"kubernetes.io/projected/14c2c0a2-eaa3-4f68-a081-35f187ee3ecd-kube-api-access-7hfls\") pod \"nova-operator-controller-manager-ddcbfd695-2dx85\" (UID: \"14c2c0a2-eaa3-4f68-a081-35f187ee3ecd\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.198741 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.206272 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.230747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9rgg\" (UniqueName: \"kubernetes.io/projected/157f36e8-64c1-41f5-a134-28a3dee716a0-kube-api-access-p9rgg\") pod \"keystone-operator-controller-manager-55f684fd56-k2wrn\" (UID: \"157f36e8-64c1-41f5-a134-28a3dee716a0\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.243592 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bg9h\" (UniqueName: \"kubernetes.io/projected/a54f43cb-a1a3-47dc-8f79-9902abf8e15a-kube-api-access-9bg9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-d7jbs\" (UID: \"a54f43cb-a1a3-47dc-8f79-9902abf8e15a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 
18:58:23.254675 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.255538 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.260870 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sv5kz" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.265513 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn5xv\" (UniqueName: \"kubernetes.io/projected/c4fd8f30-3efb-4002-9508-247a3973daf5-kube-api-access-jn5xv\") pod \"manila-operator-controller-manager-849fcfbb6b-7499c\" (UID: \"c4fd8f30-3efb-4002-9508-247a3973daf5\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.273282 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.274389 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.278050 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-b4bs4" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.287979 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.298985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs72g\" (UniqueName: \"kubernetes.io/projected/b8d0acd2-831c-4c69-bb9e-661c93caf365-kube-api-access-gs72g\") pod \"neutron-operator-controller-manager-7ffd8d76d4-b4mh9\" (UID: \"b8d0acd2-831c-4c69-bb9e-661c93caf365\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299147 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjwb\" (UniqueName: \"kubernetes.io/projected/ff429293-b11f-4482-90e7-41d277b5a044-kube-api-access-kwjwb\") pod \"ironic-operator-controller-manager-768b776ffb-c298f\" (UID: \"ff429293-b11f-4482-90e7-41d277b5a044\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299232 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hfls\" (UniqueName: \"kubernetes.io/projected/14c2c0a2-eaa3-4f68-a081-35f187ee3ecd-kube-api-access-7hfls\") pod \"nova-operator-controller-manager-ddcbfd695-2dx85\" (UID: \"14c2c0a2-eaa3-4f68-a081-35f187ee3ecd\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299343 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vb6\" (UniqueName: \"kubernetes.io/projected/f82c79f5-df27-4ba5-b193-2442492a9897-kube-api-access-v7vb6\") pod \"octavia-operator-controller-manager-7875d7675-4rgf6\" (UID: \"f82c79f5-df27-4ba5-b193-2442492a9897\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:58:23 crc kubenswrapper[4915]: 
I0127 18:58:23.299460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lgrr\" (UniqueName: \"kubernetes.io/projected/4e9d96b4-5711-408c-8f62-198e8a9af22f-kube-api-access-6lgrr\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299547 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b648r\" (UniqueName: \"kubernetes.io/projected/f448a710-dbc8-4561-8964-24b7bcf1ebf5-kube-api-access-b648r\") pod \"ovn-operator-controller-manager-6f75f45d54-vcxbh\" (UID: \"f448a710-dbc8-4561-8964-24b7bcf1ebf5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.299722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjnj\" (UniqueName: \"kubernetes.io/projected/1390839d-2560-4530-9a71-15568aa4e400-kube-api-access-pvjnj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7\" (UID: \"1390839d-2560-4530-9a71-15568aa4e400\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:23 crc kubenswrapper[4915]: E0127 18:58:23.300549 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:23 
crc kubenswrapper[4915]: E0127 18:58:23.300685 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:23.800669002 +0000 UTC m=+995.158522666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.318935 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.319757 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.320573 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.331280 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tfvrm" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.336967 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.338073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjnj\" (UniqueName: \"kubernetes.io/projected/1390839d-2560-4530-9a71-15568aa4e400-kube-api-access-pvjnj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7\" (UID: \"1390839d-2560-4530-9a71-15568aa4e400\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.338643 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.352031 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lgrr\" (UniqueName: \"kubernetes.io/projected/4e9d96b4-5711-408c-8f62-198e8a9af22f-kube-api-access-6lgrr\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.353297 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjwb\" (UniqueName: \"kubernetes.io/projected/ff429293-b11f-4482-90e7-41d277b5a044-kube-api-access-kwjwb\") pod \"ironic-operator-controller-manager-768b776ffb-c298f\" (UID: \"ff429293-b11f-4482-90e7-41d277b5a044\") " 
pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.354671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs72g\" (UniqueName: \"kubernetes.io/projected/b8d0acd2-831c-4c69-bb9e-661c93caf365-kube-api-access-gs72g\") pod \"neutron-operator-controller-manager-7ffd8d76d4-b4mh9\" (UID: \"b8d0acd2-831c-4c69-bb9e-661c93caf365\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.375529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hfls\" (UniqueName: \"kubernetes.io/projected/14c2c0a2-eaa3-4f68-a081-35f187ee3ecd-kube-api-access-7hfls\") pod \"nova-operator-controller-manager-ddcbfd695-2dx85\" (UID: \"14c2c0a2-eaa3-4f68-a081-35f187ee3ecd\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.400549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vb6\" (UniqueName: \"kubernetes.io/projected/f82c79f5-df27-4ba5-b193-2442492a9897-kube-api-access-v7vb6\") pod \"octavia-operator-controller-manager-7875d7675-4rgf6\" (UID: \"f82c79f5-df27-4ba5-b193-2442492a9897\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.400631 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b648r\" (UniqueName: \"kubernetes.io/projected/f448a710-dbc8-4561-8964-24b7bcf1ebf5-kube-api-access-b648r\") pod \"ovn-operator-controller-manager-6f75f45d54-vcxbh\" (UID: \"f448a710-dbc8-4561-8964-24b7bcf1ebf5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.400693 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66v6\" (UniqueName: \"kubernetes.io/projected/431eefc2-c140-49b9-acbc-523dffc5195b-kube-api-access-c66v6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.400723 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.400822 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.412196 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.417065 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.422635 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.423542 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.424871 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.427333 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-r6wjz" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.427998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vb6\" (UniqueName: \"kubernetes.io/projected/f82c79f5-df27-4ba5-b193-2442492a9897-kube-api-access-v7vb6\") pod \"octavia-operator-controller-manager-7875d7675-4rgf6\" (UID: \"f82c79f5-df27-4ba5-b193-2442492a9897\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.432129 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.433178 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.435645 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vkvd6" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.436101 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.439146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b648r\" (UniqueName: \"kubernetes.io/projected/f448a710-dbc8-4561-8964-24b7bcf1ebf5-kube-api-access-b648r\") pod \"ovn-operator-controller-manager-6f75f45d54-vcxbh\" (UID: \"f448a710-dbc8-4561-8964-24b7bcf1ebf5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.447067 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.457832 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.502740 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhcbq\" (UniqueName: \"kubernetes.io/projected/cd2e585e-7b6a-4b81-bd2e-969dfe93ea12-kube-api-access-zhcbq\") pod \"placement-operator-controller-manager-79d5ccc684-wn9ns\" (UID: \"cd2e585e-7b6a-4b81-bd2e-969dfe93ea12\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.502823 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c66v6\" (UniqueName: \"kubernetes.io/projected/431eefc2-c140-49b9-acbc-523dffc5195b-kube-api-access-c66v6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.502853 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.502883 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82tk\" (UniqueName: \"kubernetes.io/projected/04ed2ea6-d6e8-4e28-a038-7e9b23259535-kube-api-access-c82tk\") pod \"swift-operator-controller-manager-547cbdb99f-4lskf\" (UID: \"04ed2ea6-d6e8-4e28-a038-7e9b23259535\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:23 crc kubenswrapper[4915]: E0127 18:58:23.509172 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:23 crc kubenswrapper[4915]: E0127 18:58:23.509210 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:24.009197 +0000 UTC m=+995.367050664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.527627 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.548055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c66v6\" (UniqueName: \"kubernetes.io/projected/431eefc2-c140-49b9-acbc-523dffc5195b-kube-api-access-c66v6\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.561382 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.572776 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.580354 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.588543 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.590127 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2g4vc" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.590176 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.609402 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c82tk\" (UniqueName: \"kubernetes.io/projected/04ed2ea6-d6e8-4e28-a038-7e9b23259535-kube-api-access-c82tk\") pod \"swift-operator-controller-manager-547cbdb99f-4lskf\" (UID: \"04ed2ea6-d6e8-4e28-a038-7e9b23259535\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.609569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhcbq\" (UniqueName: \"kubernetes.io/projected/cd2e585e-7b6a-4b81-bd2e-969dfe93ea12-kube-api-access-zhcbq\") pod \"placement-operator-controller-manager-79d5ccc684-wn9ns\" (UID: \"cd2e585e-7b6a-4b81-bd2e-969dfe93ea12\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.615337 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.659856 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhcbq\" (UniqueName: \"kubernetes.io/projected/cd2e585e-7b6a-4b81-bd2e-969dfe93ea12-kube-api-access-zhcbq\") pod \"placement-operator-controller-manager-79d5ccc684-wn9ns\" (UID: \"cd2e585e-7b6a-4b81-bd2e-969dfe93ea12\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.664492 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c82tk\" (UniqueName: \"kubernetes.io/projected/04ed2ea6-d6e8-4e28-a038-7e9b23259535-kube-api-access-c82tk\") pod \"swift-operator-controller-manager-547cbdb99f-4lskf\" (UID: \"04ed2ea6-d6e8-4e28-a038-7e9b23259535\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.666902 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.668020 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.673829 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-77rks" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.685357 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.699428 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.701494 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.704319 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cpmjf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.711720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86s57\" (UniqueName: \"kubernetes.io/projected/3ca605d2-336e-4af3-ace1-f4615c16aa3e-kube-api-access-86s57\") pod \"telemetry-operator-controller-manager-799bc87c89-9lrtq\" (UID: \"3ca605d2-336e-4af3-ace1-f4615c16aa3e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.733962 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.763821 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr"] Jan 
27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.764872 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.765204 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.766784 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lfd9l" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.767028 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.768256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.780374 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.782221 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.800187 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.806651 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7"] Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.806753 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.811948 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8hl2t" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.817555 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.817627 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86s57\" (UniqueName: \"kubernetes.io/projected/3ca605d2-336e-4af3-ace1-f4615c16aa3e-kube-api-access-86s57\") pod \"telemetry-operator-controller-manager-799bc87c89-9lrtq\" (UID: \"3ca605d2-336e-4af3-ace1-f4615c16aa3e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.817659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gl65\" (UniqueName: \"kubernetes.io/projected/0a712a3e-d212-4452-950d-51d8c803ffdf-kube-api-access-7gl65\") pod \"test-operator-controller-manager-69797bbcbd-jdv22\" (UID: \"0a712a3e-d212-4452-950d-51d8c803ffdf\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.817746 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrp6m\" (UniqueName: \"kubernetes.io/projected/dd5fd940-cc29-4114-9634-3236f285b65c-kube-api-access-hrp6m\") pod 
\"watcher-operator-controller-manager-767b8bc766-hs2pq\" (UID: \"dd5fd940-cc29-4114-9634-3236f285b65c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:23 crc kubenswrapper[4915]: E0127 18:58:23.817929 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:23 crc kubenswrapper[4915]: E0127 18:58:23.817988 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:24.817969637 +0000 UTC m=+996.175823301 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.836898 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86s57\" (UniqueName: \"kubernetes.io/projected/3ca605d2-336e-4af3-ace1-f4615c16aa3e-kube-api-access-86s57\") pod \"telemetry-operator-controller-manager-799bc87c89-9lrtq\" (UID: \"3ca605d2-336e-4af3-ace1-f4615c16aa3e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.918893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:23 crc 
kubenswrapper[4915]: I0127 18:58:23.918969 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbth\" (UniqueName: \"kubernetes.io/projected/c5c7155d-017b-4653-9294-5d99093b027d-kube-api-access-5bbth\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.919015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gl65\" (UniqueName: \"kubernetes.io/projected/0a712a3e-d212-4452-950d-51d8c803ffdf-kube-api-access-7gl65\") pod \"test-operator-controller-manager-69797bbcbd-jdv22\" (UID: \"0a712a3e-d212-4452-950d-51d8c803ffdf\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.919045 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.919079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s626s\" (UniqueName: \"kubernetes.io/projected/ce0a16bf-c68f-41c4-a3ee-88cfa0f32161-kube-api-access-s626s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b5ht7\" (UID: \"ce0a16bf-c68f-41c4-a3ee-88cfa0f32161\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.919117 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hrp6m\" (UniqueName: \"kubernetes.io/projected/dd5fd940-cc29-4114-9634-3236f285b65c-kube-api-access-hrp6m\") pod \"watcher-operator-controller-manager-767b8bc766-hs2pq\" (UID: \"dd5fd940-cc29-4114-9634-3236f285b65c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.937099 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gl65\" (UniqueName: \"kubernetes.io/projected/0a712a3e-d212-4452-950d-51d8c803ffdf-kube-api-access-7gl65\") pod \"test-operator-controller-manager-69797bbcbd-jdv22\" (UID: \"0a712a3e-d212-4452-950d-51d8c803ffdf\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.940455 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrp6m\" (UniqueName: \"kubernetes.io/projected/dd5fd940-cc29-4114-9634-3236f285b65c-kube-api-access-hrp6m\") pod \"watcher-operator-controller-manager-767b8bc766-hs2pq\" (UID: \"dd5fd940-cc29-4114-9634-3236f285b65c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.972182 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:23 crc kubenswrapper[4915]: I0127 18:58:23.989055 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.021398 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.021473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbth\" (UniqueName: \"kubernetes.io/projected/c5c7155d-017b-4653-9294-5d99093b027d-kube-api-access-5bbth\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.021532 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.021565 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s626s\" (UniqueName: \"kubernetes.io/projected/ce0a16bf-c68f-41c4-a3ee-88cfa0f32161-kube-api-access-s626s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b5ht7\" (UID: \"ce0a16bf-c68f-41c4-a3ee-88cfa0f32161\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.021590 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.021747 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.021808 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:25.021782308 +0000 UTC m=+996.379635972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.022119 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.022150 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:24.522141157 +0000 UTC m=+995.879994821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.022330 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.022354 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:24.522346782 +0000 UTC m=+995.880200446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.036199 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.059472 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s626s\" (UniqueName: \"kubernetes.io/projected/ce0a16bf-c68f-41c4-a3ee-88cfa0f32161-kube-api-access-s626s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b5ht7\" (UID: \"ce0a16bf-c68f-41c4-a3ee-88cfa0f32161\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.060179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbth\" (UniqueName: \"kubernetes.io/projected/c5c7155d-017b-4653-9294-5d99093b027d-kube-api-access-5bbth\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.064926 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.180283 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.233064 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" event={"ID":"efcf0ee2-8cec-485e-b699-9228967f50de","Type":"ContainerStarted","Data":"216776393b5ca74bbb32a123fb4c992d139be282345c4b7b39464915fb20ceb0"} Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.456830 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.465591 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3abe8388_a712_4933_a52a_020b4d9cf1a1.slice/crio-e8bf23fd42a7817bf278c122e3b8d619956c193e08b9bdef10e834c7c7f932a7 WatchSource:0}: Error finding container e8bf23fd42a7817bf278c122e3b8d619956c193e08b9bdef10e834c7c7f932a7: Status 404 returned error can't find the container with id e8bf23fd42a7817bf278c122e3b8d619956c193e08b9bdef10e834c7c7f932a7 Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.474067 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.489181 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod841f427a_0fba_4d27_b325_9e048c7242d0.slice/crio-f596358a8d84416a50d01e522d6cd13d899d83d18d65ff8f216e2fcb002ae504 WatchSource:0}: Error finding container f596358a8d84416a50d01e522d6cd13d899d83d18d65ff8f216e2fcb002ae504: Status 404 returned error can't find the container with id f596358a8d84416a50d01e522d6cd13d899d83d18d65ff8f216e2fcb002ae504 Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.516587 4915 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.526846 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.530401 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.530541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.531289 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.531338 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:25.531324072 +0000 UTC m=+996.889177736 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.531548 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.531621 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:25.531600319 +0000 UTC m=+996.889454073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.534160 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod157f36e8_64c1_41f5_a134_28a3dee716a0.slice/crio-05858fd886be1342c132787c5a6fc373034891bd252e3c07d7b88dea94a7098d WatchSource:0}: Error finding container 05858fd886be1342c132787c5a6fc373034891bd252e3c07d7b88dea94a7098d: Status 404 returned error can't find the container with id 05858fd886be1342c132787c5a6fc373034891bd252e3c07d7b88dea94a7098d Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.536882 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.559462 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.594458 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.605604 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.610165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.614999 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54f43cb_a1a3_47dc_8f79_9902abf8e15a.slice/crio-1ba37af6f44884af572beb958940374a04b000f70b110fc282922e0da5ab8932 WatchSource:0}: Error finding container 1ba37af6f44884af572beb958940374a04b000f70b110fc282922e0da5ab8932: Status 404 returned error can't find the container with id 1ba37af6f44884af572beb958940374a04b000f70b110fc282922e0da5ab8932 Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.615251 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c2c0a2_eaa3_4f68_a081_35f187ee3ecd.slice/crio-2619d4c0ddde4bf0c7b56fef5e1cf12e6add5cae999fd16b34a08afa6b4b9fe0 WatchSource:0}: Error finding container 2619d4c0ddde4bf0c7b56fef5e1cf12e6add5cae999fd16b34a08afa6b4b9fe0: Status 404 returned error can't find the container with id 2619d4c0ddde4bf0c7b56fef5e1cf12e6add5cae999fd16b34a08afa6b4b9fe0 Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.616121 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf82c79f5_df27_4ba5_b193_2442492a9897.slice/crio-17b3727dd3f29fb758a30aef5af52a0b36752d203d710e39bf3a71c692dfbdf4 WatchSource:0}: Error finding container 17b3727dd3f29fb758a30aef5af52a0b36752d203d710e39bf3a71c692dfbdf4: Status 404 returned error can't find the container with id 17b3727dd3f29fb758a30aef5af52a0b36752d203d710e39bf3a71c692dfbdf4 Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.623518 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.627884 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.635279 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.639258 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d0acd2_831c_4c69_bb9e_661c93caf365.slice/crio-d848f46375ccde49db956e756434ae85a013d8cd4eec7dce29e5c887fe1c6dea WatchSource:0}: Error finding container d848f46375ccde49db956e756434ae85a013d8cd4eec7dce29e5c887fe1c6dea: Status 404 returned error can't find the container with id d848f46375ccde49db956e756434ae85a013d8cd4eec7dce29e5c887fe1c6dea Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.639367 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvjnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7_openstack-operators(1390839d-2560-4530-9a71-15568aa4e400): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.643311 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" podUID="1390839d-2560-4530-9a71-15568aa4e400" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.655319 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gs72g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-b4mh9_openstack-operators(b8d0acd2-831c-4c69-bb9e-661c93caf365): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.656615 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" podUID="b8d0acd2-831c-4c69-bb9e-661c93caf365" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.784897 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.792312 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a712a3e_d212_4452_950d_51d8c803ffdf.slice/crio-297972a3b459e3ee2b3d4a0b904e59eef2b87cefd4cf09ec283da6be9faf8dc1 WatchSource:0}: Error finding container 297972a3b459e3ee2b3d4a0b904e59eef2b87cefd4cf09ec283da6be9faf8dc1: Status 404 returned error can't find the container with id 
297972a3b459e3ee2b3d4a0b904e59eef2b87cefd4cf09ec283da6be9faf8dc1 Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.797691 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq"] Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.799031 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-jdv22_openstack-operators(0a712a3e-d212-4452-950d-51d8c803ffdf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.800903 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podUID="0a712a3e-d212-4452-950d-51d8c803ffdf" Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.818202 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04ed2ea6_d6e8_4e28_a038_7e9b23259535.slice/crio-1b408f88451d74018f506718e02a110a10c2e3dcfc52b92d935b152200f5de09 WatchSource:0}: Error finding container 1b408f88451d74018f506718e02a110a10c2e3dcfc52b92d935b152200f5de09: Status 404 returned error can't find the container with id 1b408f88451d74018f506718e02a110a10c2e3dcfc52b92d935b152200f5de09 Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.821543 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c82tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-4lskf_openstack-operators(04ed2ea6-d6e8-4e28-a038-7e9b23259535): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.822686 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podUID="04ed2ea6-d6e8-4e28-a038-7e9b23259535" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.824759 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf"] Jan 27 18:58:24 crc kubenswrapper[4915]: W0127 18:58:24.828972 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf448a710_dbc8_4561_8964_24b7bcf1ebf5.slice/crio-3763f55dbc7378d159bb707faa930eb6596be9d6a3e6c33333d463c42acc0ddc WatchSource:0}: Error finding container 3763f55dbc7378d159bb707faa930eb6596be9d6a3e6c33333d463c42acc0ddc: Status 404 returned error can't find the container with id 
3763f55dbc7378d159bb707faa930eb6596be9d6a3e6c33333d463c42acc0ddc Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.829742 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh"] Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.831674 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrp6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-hs2pq_openstack-operators(dd5fd940-cc29-4114-9634-3236f285b65c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.831696 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b648r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-vcxbh_openstack-operators(f448a710-dbc8-4561-8964-24b7bcf1ebf5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.832742 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhcbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-wn9ns_openstack-operators(cd2e585e-7b6a-4b81-bd2e-969dfe93ea12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.832841 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" podUID="f448a710-dbc8-4561-8964-24b7bcf1ebf5" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.832862 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" podUID="dd5fd940-cc29-4114-9634-3236f285b65c" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.833119 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s626s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-b5ht7_openstack-operators(ce0a16bf-c68f-41c4-a3ee-88cfa0f32161): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.834829 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" podUID="ce0a16bf-c68f-41c4-a3ee-88cfa0f32161" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.834860 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" podUID="cd2e585e-7b6a-4b81-bd2e-969dfe93ea12" Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.835449 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.835587 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: E0127 18:58:24.835652 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:26.83563315 +0000 UTC m=+998.193486804 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.836137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.841860 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7"] Jan 27 18:58:24 crc kubenswrapper[4915]: I0127 18:58:24.846615 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq"] Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.037440 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.037619 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.037695 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:27.037673039 +0000 UTC m=+998.395526713 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.239694 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" event={"ID":"3ca605d2-336e-4af3-ace1-f4615c16aa3e","Type":"ContainerStarted","Data":"9c8551946195a6d06367184aba02046c0f0988f396164a9f58e71f47611be4b5"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.241131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" event={"ID":"411fdf42-e261-4eb9-a72b-e5067da8116d","Type":"ContainerStarted","Data":"645f50352bb8e5a370781522c7e5fff80eb242514f8ac895a3daa2d463740fc3"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.242844 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" event={"ID":"ff429293-b11f-4482-90e7-41d277b5a044","Type":"ContainerStarted","Data":"046c2dd4ead2427f13a03aa052f20b1bf46e8a280085f9b0f8f1d02dd18bf97c"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.244424 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" event={"ID":"841f427a-0fba-4d27-b325-9e048c7242d0","Type":"ContainerStarted","Data":"f596358a8d84416a50d01e522d6cd13d899d83d18d65ff8f216e2fcb002ae504"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.245647 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" 
event={"ID":"14c2c0a2-eaa3-4f68-a081-35f187ee3ecd","Type":"ContainerStarted","Data":"2619d4c0ddde4bf0c7b56fef5e1cf12e6add5cae999fd16b34a08afa6b4b9fe0"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.246750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" event={"ID":"1390839d-2560-4530-9a71-15568aa4e400","Type":"ContainerStarted","Data":"c0090d8ea95e1bce5cbb2a9f2948e52a369ef17b85e08fc164a4db645d9266d0"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.248276 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" event={"ID":"c4fd8f30-3efb-4002-9508-247a3973daf5","Type":"ContainerStarted","Data":"5471be4ad659b09b157b0d966de8e12797501eeb557ab68e827df5269e64f967"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.249861 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" podUID="1390839d-2560-4530-9a71-15568aa4e400" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.250880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" event={"ID":"157f36e8-64c1-41f5-a134-28a3dee716a0","Type":"ContainerStarted","Data":"05858fd886be1342c132787c5a6fc373034891bd252e3c07d7b88dea94a7098d"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.251719 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" 
event={"ID":"a54f43cb-a1a3-47dc-8f79-9902abf8e15a","Type":"ContainerStarted","Data":"1ba37af6f44884af572beb958940374a04b000f70b110fc282922e0da5ab8932"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.253242 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" event={"ID":"f82c79f5-df27-4ba5-b193-2442492a9897","Type":"ContainerStarted","Data":"17b3727dd3f29fb758a30aef5af52a0b36752d203d710e39bf3a71c692dfbdf4"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.258751 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" event={"ID":"0a712a3e-d212-4452-950d-51d8c803ffdf","Type":"ContainerStarted","Data":"297972a3b459e3ee2b3d4a0b904e59eef2b87cefd4cf09ec283da6be9faf8dc1"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.266281 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podUID="0a712a3e-d212-4452-950d-51d8c803ffdf" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.267395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" event={"ID":"3db878ad-a2f4-4bc6-bc28-a91ace116f6c","Type":"ContainerStarted","Data":"1b5bd9b6e85fcc6bf152d8f62ab846b968c9e3f10bb8bf2d35d8de683d4622da"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.270612 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" 
event={"ID":"f448a710-dbc8-4561-8964-24b7bcf1ebf5","Type":"ContainerStarted","Data":"3763f55dbc7378d159bb707faa930eb6596be9d6a3e6c33333d463c42acc0ddc"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.276805 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" podUID="f448a710-dbc8-4561-8964-24b7bcf1ebf5" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.278038 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" event={"ID":"b8d0acd2-831c-4c69-bb9e-661c93caf365","Type":"ContainerStarted","Data":"d848f46375ccde49db956e756434ae85a013d8cd4eec7dce29e5c887fe1c6dea"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.279351 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" podUID="b8d0acd2-831c-4c69-bb9e-661c93caf365" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.279926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" event={"ID":"cd2e585e-7b6a-4b81-bd2e-969dfe93ea12","Type":"ContainerStarted","Data":"42099ea15ddac37f96df79ad6643ffee2092791fcc5a0558c9d8ddb35bd26e5d"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.280873 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" podUID="cd2e585e-7b6a-4b81-bd2e-969dfe93ea12" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.286027 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" event={"ID":"dd5fd940-cc29-4114-9634-3236f285b65c","Type":"ContainerStarted","Data":"dc32ba6a9a52b7586e2196cd09fd9fdfa6adf99ca7bd67adf094cb8e3b9d2397"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.287300 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" event={"ID":"ce0a16bf-c68f-41c4-a3ee-88cfa0f32161","Type":"ContainerStarted","Data":"5349d7f8e6a4e5440e934b2974d9d85a4964546a4298b5fc6f7d3d54b9e0f5e6"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.287753 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" podUID="dd5fd940-cc29-4114-9634-3236f285b65c" Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.288706 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" podUID="ce0a16bf-c68f-41c4-a3ee-88cfa0f32161" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.290338 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" event={"ID":"04ed2ea6-d6e8-4e28-a038-7e9b23259535","Type":"ContainerStarted","Data":"1b408f88451d74018f506718e02a110a10c2e3dcfc52b92d935b152200f5de09"} Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.295249 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podUID="04ed2ea6-d6e8-4e28-a038-7e9b23259535" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.296965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" event={"ID":"3abe8388-a712-4933-a52a-020b4d9cf1a1","Type":"ContainerStarted","Data":"e8bf23fd42a7817bf278c122e3b8d619956c193e08b9bdef10e834c7c7f932a7"} Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.544568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:25 crc kubenswrapper[4915]: I0127 18:58:25.544900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:25 crc kubenswrapper[4915]: 
E0127 18:58:25.544724 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.545090 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:27.545074961 +0000 UTC m=+998.902928615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.545040 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:58:25 crc kubenswrapper[4915]: E0127 18:58:25.545990 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:27.545969192 +0000 UTC m=+998.903822856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "webhook-server-cert" not found Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.306815 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" podUID="ce0a16bf-c68f-41c4-a3ee-88cfa0f32161" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.307068 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" podUID="1390839d-2560-4530-9a71-15568aa4e400" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.307287 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" podUID="dd5fd940-cc29-4114-9634-3236f285b65c" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.307432 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podUID="04ed2ea6-d6e8-4e28-a038-7e9b23259535" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.307670 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" podUID="cd2e585e-7b6a-4b81-bd2e-969dfe93ea12" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.308628 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" podUID="b8d0acd2-831c-4c69-bb9e-661c93caf365" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.308688 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podUID="0a712a3e-d212-4452-950d-51d8c803ffdf" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.309008 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" podUID="f448a710-dbc8-4561-8964-24b7bcf1ebf5" Jan 27 18:58:26 crc kubenswrapper[4915]: I0127 18:58:26.874537 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.874728 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:26 crc kubenswrapper[4915]: E0127 18:58:26.874820 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:30.874784812 +0000 UTC m=+1002.232638476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: I0127 18:58:27.079065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.079302 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.079379 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:31.079357443 +0000 UTC m=+1002.437211107 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: I0127 18:58:27.586428 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:27 crc kubenswrapper[4915]: I0127 18:58:27.586560 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.586622 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.586698 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:31.586680852 +0000 UTC m=+1002.944534516 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "webhook-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.586698 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:27 crc kubenswrapper[4915]: E0127 18:58:27.586756 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:31.586736963 +0000 UTC m=+1002.944590627 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:30 crc kubenswrapper[4915]: I0127 18:58:30.934624 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:30 crc kubenswrapper[4915]: E0127 18:58:30.934836 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:30 crc kubenswrapper[4915]: E0127 18:58:30.935114 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert 
podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:38.935094214 +0000 UTC m=+1010.292947968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: I0127 18:58:31.138121 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.138336 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.138423 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:39.138402232 +0000 UTC m=+1010.496255896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: I0127 18:58:31.644485 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.644704 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: I0127 18:58:31.644944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.645020 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:39.644996965 +0000 UTC m=+1011.002850639 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "webhook-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.645038 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:31 crc kubenswrapper[4915]: E0127 18:58:31.645211 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:39.645193589 +0000 UTC m=+1011.003047253 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:36 crc kubenswrapper[4915]: E0127 18:58:36.868958 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49" Jan 27 18:58:36 crc kubenswrapper[4915]: E0127 18:58:36.869731 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7vb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-4rgf6_openstack-operators(f82c79f5-df27-4ba5-b193-2442492a9897): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:36 crc kubenswrapper[4915]: E0127 18:58:36.870957 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" podUID="f82c79f5-df27-4ba5-b193-2442492a9897" Jan 27 18:58:37 crc kubenswrapper[4915]: E0127 18:58:37.384846 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" podUID="f82c79f5-df27-4ba5-b193-2442492a9897" Jan 27 18:58:37 crc kubenswrapper[4915]: E0127 18:58:37.475317 4915 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 18:58:37 crc kubenswrapper[4915]: E0127 18:58:37.475510 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9rgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-k2wrn_openstack-operators(157f36e8-64c1-41f5-a134-28a3dee716a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:37 crc kubenswrapper[4915]: E0127 18:58:37.476725 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" podUID="157f36e8-64c1-41f5-a134-28a3dee716a0" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.214549 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.214824 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hfls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-2dx85_openstack-operators(14c2c0a2-eaa3-4f68-a081-35f187ee3ecd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.216024 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" podUID="14c2c0a2-eaa3-4f68-a081-35f187ee3ecd" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.392595 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" podUID="157f36e8-64c1-41f5-a134-28a3dee716a0" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.392860 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" podUID="14c2c0a2-eaa3-4f68-a081-35f187ee3ecd" Jan 27 18:58:38 crc kubenswrapper[4915]: I0127 18:58:38.955865 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.956095 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:38 crc kubenswrapper[4915]: E0127 18:58:38.956188 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert podName:4e9d96b4-5711-408c-8f62-198e8a9af22f nodeName:}" failed. No retries permitted until 2026-01-27 18:58:54.956165483 +0000 UTC m=+1026.314019207 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert") pod "infra-operator-controller-manager-7d75bc88d5-rklwj" (UID: "4e9d96b4-5711-408c-8f62-198e8a9af22f") : secret "infra-operator-webhook-server-cert" not found Jan 27 18:58:39 crc kubenswrapper[4915]: I0127 18:58:39.159200 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:39 crc kubenswrapper[4915]: E0127 18:58:39.159421 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:39 crc kubenswrapper[4915]: E0127 18:58:39.159521 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert podName:431eefc2-c140-49b9-acbc-523dffc5195b nodeName:}" failed. No retries permitted until 2026-01-27 18:58:55.159496573 +0000 UTC m=+1026.517350247 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" (UID: "431eefc2-c140-49b9-acbc-523dffc5195b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 18:58:39 crc kubenswrapper[4915]: I0127 18:58:39.665361 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:39 crc kubenswrapper[4915]: I0127 18:58:39.665722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:39 crc kubenswrapper[4915]: E0127 18:58:39.665838 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 18:58:39 crc kubenswrapper[4915]: E0127 18:58:39.665919 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs podName:c5c7155d-017b-4653-9294-5d99093b027d nodeName:}" failed. No retries permitted until 2026-01-27 18:58:55.66589586 +0000 UTC m=+1027.023749584 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs") pod "openstack-operator-controller-manager-bf776578d-hcmpr" (UID: "c5c7155d-017b-4653-9294-5d99093b027d") : secret "metrics-server-cert" not found Jan 27 18:58:39 crc kubenswrapper[4915]: I0127 18:58:39.673171 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-webhook-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.414249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" event={"ID":"3abe8388-a712-4933-a52a-020b4d9cf1a1","Type":"ContainerStarted","Data":"a6902ca874a8b8484c2a88c09fee2be64ab0e96bf644b3eba392e3b6d8cf1e6a"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.415561 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.426009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" event={"ID":"841f427a-0fba-4d27-b325-9e048c7242d0","Type":"ContainerStarted","Data":"8501e54d9ad347715e0740cb6c7990a955cd1b67b8efeba7c73401f3543c2415"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.426654 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.438940 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" podStartSLOduration=4.735603267 podStartE2EDuration="18.438922661s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.467584618 +0000 UTC m=+995.825438282" lastFinishedPulling="2026-01-27 18:58:38.170904012 +0000 UTC m=+1009.528757676" observedRunningTime="2026-01-27 18:58:40.438276485 +0000 UTC m=+1011.796130159" watchObservedRunningTime="2026-01-27 18:58:40.438922661 +0000 UTC m=+1011.796776325" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.444413 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" event={"ID":"c4fd8f30-3efb-4002-9508-247a3973daf5","Type":"ContainerStarted","Data":"a860cce358d649463a09cfe55658b532c87423b8a3447bb68273005bccf2a60d"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.445154 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.477059 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" event={"ID":"3ca605d2-336e-4af3-ace1-f4615c16aa3e","Type":"ContainerStarted","Data":"f8c28719442f3cb450df0b68feb8a72bec6ac2c0f5e980a61d94771f5d03364c"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.477728 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.478577 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" podStartSLOduration=3.44273922 podStartE2EDuration="18.478563304s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" 
firstStartedPulling="2026-01-27 18:58:24.499503542 +0000 UTC m=+995.857357206" lastFinishedPulling="2026-01-27 18:58:39.535327626 +0000 UTC m=+1010.893181290" observedRunningTime="2026-01-27 18:58:40.471336567 +0000 UTC m=+1011.829190241" watchObservedRunningTime="2026-01-27 18:58:40.478563304 +0000 UTC m=+1011.836416978" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.484235 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" event={"ID":"411fdf42-e261-4eb9-a72b-e5067da8116d","Type":"ContainerStarted","Data":"73d6a3a365016708ff9d2350f5ae4b9a3f4860bb9ad6c2513ae1e74fa445752e"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.485411 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.493134 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" event={"ID":"efcf0ee2-8cec-485e-b699-9228967f50de","Type":"ContainerStarted","Data":"00de3965f318d40fd5cc9e68a3eb507e8f9bbc296aee550c85a6039a12c0376e"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.493919 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.514563 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" podStartSLOduration=4.889801891 podStartE2EDuration="18.514541357s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.547907789 +0000 UTC m=+995.905761453" lastFinishedPulling="2026-01-27 18:58:38.172647255 +0000 UTC m=+1009.530500919" observedRunningTime="2026-01-27 18:58:40.502195094 +0000 UTC 
m=+1011.860048748" watchObservedRunningTime="2026-01-27 18:58:40.514541357 +0000 UTC m=+1011.872395021" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.517510 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" event={"ID":"ce0a16bf-c68f-41c4-a3ee-88cfa0f32161","Type":"ContainerStarted","Data":"a5f1a95005c5278bb24df14c31f1b91a0d70e89edf36f3c6d21d9fd73a5820f4"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.520205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" event={"ID":"3db878ad-a2f4-4bc6-bc28-a91ace116f6c","Type":"ContainerStarted","Data":"b707a69db776beaabd2a53d89a5906bde36a60e3d2cbcfd4357f0ae7b74dfd6c"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.520946 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.549991 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" event={"ID":"ff429293-b11f-4482-90e7-41d277b5a044","Type":"ContainerStarted","Data":"bf4318c8df5ec708101de75518b9bac7bba326bab1f985bd6885fb42022fb327"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.550602 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.560164 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" event={"ID":"a54f43cb-a1a3-47dc-8f79-9902abf8e15a","Type":"ContainerStarted","Data":"d8731a2c597cc4cf37df7a7e438f0bf6c0a4ac76474c5e25cd25c9e906cf4607"} Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.560781 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.564403 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" podStartSLOduration=3.99413565 podStartE2EDuration="18.56438604s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.562090757 +0000 UTC m=+995.919944421" lastFinishedPulling="2026-01-27 18:58:39.132341157 +0000 UTC m=+1010.490194811" observedRunningTime="2026-01-27 18:58:40.53628502 +0000 UTC m=+1011.894138684" watchObservedRunningTime="2026-01-27 18:58:40.56438604 +0000 UTC m=+1011.922239724" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.568815 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" podStartSLOduration=4.523171843 podStartE2EDuration="18.568783118s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.126029536 +0000 UTC m=+995.483883200" lastFinishedPulling="2026-01-27 18:58:38.171640811 +0000 UTC m=+1009.529494475" observedRunningTime="2026-01-27 18:58:40.561398407 +0000 UTC m=+1011.919252061" watchObservedRunningTime="2026-01-27 18:58:40.568783118 +0000 UTC m=+1011.926636792" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.608000 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" podStartSLOduration=4.234602883 podStartE2EDuration="17.60797869s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.797439933 +0000 UTC m=+996.155293597" lastFinishedPulling="2026-01-27 18:58:38.17081574 +0000 UTC m=+1009.528669404" observedRunningTime="2026-01-27 
18:58:40.596275183 +0000 UTC m=+1011.954128847" watchObservedRunningTime="2026-01-27 18:58:40.60797869 +0000 UTC m=+1011.965832344" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.619117 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b5ht7" podStartSLOduration=2.829792529 podStartE2EDuration="17.619092533s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.833033607 +0000 UTC m=+996.190887271" lastFinishedPulling="2026-01-27 18:58:39.622333611 +0000 UTC m=+1010.980187275" observedRunningTime="2026-01-27 18:58:40.618052997 +0000 UTC m=+1011.975906671" watchObservedRunningTime="2026-01-27 18:58:40.619092533 +0000 UTC m=+1011.976946197" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.648371 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" podStartSLOduration=3.730586014 podStartE2EDuration="18.648353561s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.618530343 +0000 UTC m=+995.976383997" lastFinishedPulling="2026-01-27 18:58:39.53629788 +0000 UTC m=+1010.894151544" observedRunningTime="2026-01-27 18:58:40.637706669 +0000 UTC m=+1011.995560333" watchObservedRunningTime="2026-01-27 18:58:40.648353561 +0000 UTC m=+1012.006207225" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.669207 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" podStartSLOduration=5.053240372 podStartE2EDuration="18.669193462s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.55486127 +0000 UTC m=+995.912714934" lastFinishedPulling="2026-01-27 18:58:38.17081436 +0000 UTC m=+1009.528668024" observedRunningTime="2026-01-27 18:58:40.668181357 +0000 
UTC m=+1012.026035021" watchObservedRunningTime="2026-01-27 18:58:40.669193462 +0000 UTC m=+1012.027047126" Jan 27 18:58:40 crc kubenswrapper[4915]: I0127 18:58:40.712184 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" podStartSLOduration=3.809770097 podStartE2EDuration="18.712163977s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.634954656 +0000 UTC m=+995.992808320" lastFinishedPulling="2026-01-27 18:58:39.537348536 +0000 UTC m=+1010.895202200" observedRunningTime="2026-01-27 18:58:40.706038826 +0000 UTC m=+1012.063892490" watchObservedRunningTime="2026-01-27 18:58:40.712163977 +0000 UTC m=+1012.070017641" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.611218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" event={"ID":"b8d0acd2-831c-4c69-bb9e-661c93caf365","Type":"ContainerStarted","Data":"a2ca61969f0024d01c76e1eedff2260f998c65f7a6110436d77f6cfb53c042de"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.611731 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.612583 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" event={"ID":"cd2e585e-7b6a-4b81-bd2e-969dfe93ea12","Type":"ContainerStarted","Data":"5e115610f4ecb1522ef371b3ff312df50c449c2c64e0b886678f09bafc8cf482"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.612773 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.613913 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" event={"ID":"1390839d-2560-4530-9a71-15568aa4e400","Type":"ContainerStarted","Data":"3cdef90ecf7ab3680fc96463d24299328f82d4abf897780c698616e498f3eec0"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.614093 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.615341 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" event={"ID":"0a712a3e-d212-4452-950d-51d8c803ffdf","Type":"ContainerStarted","Data":"61d4d2ac05fd43a65436e8fdc48a7afc83359848d1889f8728fb8ac4ad10c3d6"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.615949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.617075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" event={"ID":"dd5fd940-cc29-4114-9634-3236f285b65c","Type":"ContainerStarted","Data":"6c299d674dccd8c80e4d9f3f25f3d166e0d1fa10fa9f82b69c02260758758e14"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.617259 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.618164 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" event={"ID":"04ed2ea6-d6e8-4e28-a038-7e9b23259535","Type":"ContainerStarted","Data":"fccd322cfe12a784f0aa29192f610b1cc4211649ebd48954b1488ce023429894"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.618348 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.619550 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" event={"ID":"f448a710-dbc8-4561-8964-24b7bcf1ebf5","Type":"ContainerStarted","Data":"85e569e22911a3b75d369d3750cf0c4004eba7f7a6387b6f996c017251ee188a"} Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.619785 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.634167 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" podStartSLOduration=2.196214199 podStartE2EDuration="24.634145104s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.655131481 +0000 UTC m=+996.012985145" lastFinishedPulling="2026-01-27 18:58:47.093062346 +0000 UTC m=+1018.450916050" observedRunningTime="2026-01-27 18:58:47.629338406 +0000 UTC m=+1018.987192090" watchObservedRunningTime="2026-01-27 18:58:47.634145104 +0000 UTC m=+1018.991998778" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.658317 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podStartSLOduration=2.37023131 podStartE2EDuration="24.658295547s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.821402391 +0000 UTC m=+996.179256045" lastFinishedPulling="2026-01-27 18:58:47.109466588 +0000 UTC m=+1018.467320282" observedRunningTime="2026-01-27 18:58:47.650490825 +0000 UTC m=+1019.008344489" watchObservedRunningTime="2026-01-27 18:58:47.658295547 +0000 UTC m=+1019.016149221" Jan 27 
18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.680836 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" podStartSLOduration=3.187021235 podStartE2EDuration="25.68081644s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.63919202 +0000 UTC m=+995.997045684" lastFinishedPulling="2026-01-27 18:58:47.132987185 +0000 UTC m=+1018.490840889" observedRunningTime="2026-01-27 18:58:47.67755958 +0000 UTC m=+1019.035413234" watchObservedRunningTime="2026-01-27 18:58:47.68081644 +0000 UTC m=+1019.038670114" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.699412 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podStartSLOduration=2.388078748 podStartE2EDuration="24.699396616s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.798921449 +0000 UTC m=+996.156775113" lastFinishedPulling="2026-01-27 18:58:47.110239287 +0000 UTC m=+1018.468092981" observedRunningTime="2026-01-27 18:58:47.698964505 +0000 UTC m=+1019.056818189" watchObservedRunningTime="2026-01-27 18:58:47.699396616 +0000 UTC m=+1019.057250280" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.717623 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" podStartSLOduration=2.439159891 podStartE2EDuration="24.717604332s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.831601831 +0000 UTC m=+996.189455495" lastFinishedPulling="2026-01-27 18:58:47.110046232 +0000 UTC m=+1018.467899936" observedRunningTime="2026-01-27 18:58:47.714587858 +0000 UTC m=+1019.072441532" watchObservedRunningTime="2026-01-27 18:58:47.717604332 +0000 UTC m=+1019.075457996" Jan 27 18:58:47 crc 
kubenswrapper[4915]: I0127 18:58:47.729938 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" podStartSLOduration=2.451981576 podStartE2EDuration="24.729922745s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.832647147 +0000 UTC m=+996.190500811" lastFinishedPulling="2026-01-27 18:58:47.110588296 +0000 UTC m=+1018.468441980" observedRunningTime="2026-01-27 18:58:47.724380609 +0000 UTC m=+1019.082234263" watchObservedRunningTime="2026-01-27 18:58:47.729922745 +0000 UTC m=+1019.087776409" Jan 27 18:58:47 crc kubenswrapper[4915]: I0127 18:58:47.741445 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" podStartSLOduration=2.440024202 podStartE2EDuration="24.741428457s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.83155732 +0000 UTC m=+996.189410984" lastFinishedPulling="2026-01-27 18:58:47.132961565 +0000 UTC m=+1018.490815239" observedRunningTime="2026-01-27 18:58:47.740644688 +0000 UTC m=+1019.098498352" watchObservedRunningTime="2026-01-27 18:58:47.741428457 +0000 UTC m=+1019.099282121" Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.625302 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.625756 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.625867 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.626842 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.626937 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44" gracePeriod=600 Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.643288 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" event={"ID":"157f36e8-64c1-41f5-a134-28a3dee716a0","Type":"ContainerStarted","Data":"ca2ec5cd1cd5db3b0001614080952de0aaca283ca7568aa95000f399264449f1"} Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.644162 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:58:50 crc kubenswrapper[4915]: I0127 18:58:50.674076 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" podStartSLOduration=3.346862476 podStartE2EDuration="28.674056384s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" 
firstStartedPulling="2026-01-27 18:58:24.53815117 +0000 UTC m=+995.896004834" lastFinishedPulling="2026-01-27 18:58:49.865345078 +0000 UTC m=+1021.223198742" observedRunningTime="2026-01-27 18:58:50.668784085 +0000 UTC m=+1022.026637789" watchObservedRunningTime="2026-01-27 18:58:50.674056384 +0000 UTC m=+1022.031910048" Jan 27 18:58:51 crc kubenswrapper[4915]: I0127 18:58:51.655903 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44" exitCode=0 Jan 27 18:58:51 crc kubenswrapper[4915]: I0127 18:58:51.655986 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44"} Jan 27 18:58:51 crc kubenswrapper[4915]: I0127 18:58:51.656542 4915 scope.go:117] "RemoveContainer" containerID="da511792c908f1faa8a2f05c73323541a29c21bfe552744df49646a41552c036" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.154101 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lfz95" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.166205 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-mrctf" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.192378 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-2wrrs" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.203264 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-4jwbr" Jan 27 18:58:53 crc kubenswrapper[4915]: 
I0127 18:58:53.212756 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-l5rlq" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.293154 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-7499c" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.415445 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-c298f" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.427597 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.447099 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-b4mh9" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.534336 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-d7jbs" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.619546 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-vcxbh" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.769393 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.788588 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-wn9ns" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.975855 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-9lrtq" Jan 27 18:58:53 crc kubenswrapper[4915]: I0127 18:58:53.993809 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" Jan 27 18:58:54 crc kubenswrapper[4915]: I0127 18:58:54.039729 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-hs2pq" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.004892 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.014480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e9d96b4-5711-408c-8f62-198e8a9af22f-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-rklwj\" (UID: \"4e9d96b4-5711-408c-8f62-198e8a9af22f\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.201695 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-k8fz5" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.209889 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.210717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.217069 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431eefc2-c140-49b9-acbc-523dffc5195b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xjclv\" (UID: \"431eefc2-c140-49b9-acbc-523dffc5195b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.453962 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tfvrm" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.462237 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.719166 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.724755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c7155d-017b-4653-9294-5d99093b027d-metrics-certs\") pod \"openstack-operator-controller-manager-bf776578d-hcmpr\" (UID: \"c5c7155d-017b-4653-9294-5d99093b027d\") " pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.965146 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lfd9l" Jan 27 18:58:55 crc kubenswrapper[4915]: I0127 18:58:55.973743 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.605867 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj"] Jan 27 18:59:00 crc kubenswrapper[4915]: W0127 18:59:00.612615 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9d96b4_5711_408c_8f62_198e8a9af22f.slice/crio-65300ada63cbc08272038efa2c802644f794ebc2ba71fb972df534bb46a2a3f9 WatchSource:0}: Error finding container 65300ada63cbc08272038efa2c802644f794ebc2ba71fb972df534bb46a2a3f9: Status 404 returned error can't find the container with id 65300ada63cbc08272038efa2c802644f794ebc2ba71fb972df534bb46a2a3f9 Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.627703 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv"] Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.636940 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr"] Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.727552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" event={"ID":"c5c7155d-017b-4653-9294-5d99093b027d","Type":"ContainerStarted","Data":"80881d05688ac0e0a2ce56ddffc0510d795b1a05cbeffd5a8bf19d41a10e58fa"} Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.730512 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" event={"ID":"4e9d96b4-5711-408c-8f62-198e8a9af22f","Type":"ContainerStarted","Data":"65300ada63cbc08272038efa2c802644f794ebc2ba71fb972df534bb46a2a3f9"} Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 
18:59:00.732278 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" event={"ID":"14c2c0a2-eaa3-4f68-a081-35f187ee3ecd","Type":"ContainerStarted","Data":"913dce524acb6848bf285c9440b3a5cf8d7630398bdc2925ae1087558949fefb"} Jan 27 18:59:00 crc kubenswrapper[4915]: I0127 18:59:00.734424 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" event={"ID":"431eefc2-c140-49b9-acbc-523dffc5195b","Type":"ContainerStarted","Data":"44d2b513fc4f87c5a69e8ded4f9f3b91995e5d3acfdd2a831556b7afb482fd2a"} Jan 27 18:59:01 crc kubenswrapper[4915]: I0127 18:59:01.748768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a"} Jan 27 18:59:01 crc kubenswrapper[4915]: I0127 18:59:01.752413 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" event={"ID":"c5c7155d-017b-4653-9294-5d99093b027d","Type":"ContainerStarted","Data":"77f8583fcdd74f161c55256e7ac3f96e02207a4de8ebd3c5235f1ee1ca129d1f"} Jan 27 18:59:01 crc kubenswrapper[4915]: I0127 18:59:01.752564 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:59:01 crc kubenswrapper[4915]: I0127 18:59:01.809423 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" podStartSLOduration=3.699334726 podStartE2EDuration="38.80940619s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.618062891 +0000 UTC m=+995.975916555" lastFinishedPulling="2026-01-27 
18:58:59.728134345 +0000 UTC m=+1031.085988019" observedRunningTime="2026-01-27 18:59:01.797505618 +0000 UTC m=+1033.155359332" watchObservedRunningTime="2026-01-27 18:59:01.80940619 +0000 UTC m=+1033.167259854" Jan 27 18:59:02 crc kubenswrapper[4915]: I0127 18:59:02.802014 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" podStartSLOduration=39.801979668 podStartE2EDuration="39.801979668s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:59:02.801011355 +0000 UTC m=+1034.158865119" watchObservedRunningTime="2026-01-27 18:59:02.801979668 +0000 UTC m=+1034.159833382" Jan 27 18:59:03 crc kubenswrapper[4915]: I0127 18:59:03.327289 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-k2wrn" Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 18:59:04.779449 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" event={"ID":"4e9d96b4-5711-408c-8f62-198e8a9af22f","Type":"ContainerStarted","Data":"cb24f966732a43561ac27592c353dca22b5b25ee994996fa6f5e9930ba1a596d"} Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 18:59:04.780063 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 18:59:04.782981 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" event={"ID":"f82c79f5-df27-4ba5-b193-2442492a9897","Type":"ContainerStarted","Data":"47c005010dcf87d7cedc8d0ae4b69d26fb95ee71b341acb4b8ad83871af949f1"} Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 
18:59:04.783244 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 18:59:04.806770 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" podStartSLOduration=38.833516872 podStartE2EDuration="42.806749186s" podCreationTimestamp="2026-01-27 18:58:22 +0000 UTC" firstStartedPulling="2026-01-27 18:59:00.615958223 +0000 UTC m=+1031.973811897" lastFinishedPulling="2026-01-27 18:59:04.589190547 +0000 UTC m=+1035.947044211" observedRunningTime="2026-01-27 18:59:04.799067617 +0000 UTC m=+1036.156921291" watchObservedRunningTime="2026-01-27 18:59:04.806749186 +0000 UTC m=+1036.164602850" Jan 27 18:59:04 crc kubenswrapper[4915]: I0127 18:59:04.818559 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" podStartSLOduration=3.074929123 podStartE2EDuration="41.818543715s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:58:24.620904251 +0000 UTC m=+995.978757915" lastFinishedPulling="2026-01-27 18:59:03.364518843 +0000 UTC m=+1034.722372507" observedRunningTime="2026-01-27 18:59:04.81548879 +0000 UTC m=+1036.173342474" watchObservedRunningTime="2026-01-27 18:59:04.818543715 +0000 UTC m=+1036.176397379" Jan 27 18:59:05 crc kubenswrapper[4915]: I0127 18:59:05.793014 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" event={"ID":"431eefc2-c140-49b9-acbc-523dffc5195b","Type":"ContainerStarted","Data":"af80aa9b0164b154c789f6b9d20f4ebad911a633d3c8a61fc0294ae8a31bb3b4"} Jan 27 18:59:05 crc kubenswrapper[4915]: I0127 18:59:05.974156 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:59:06 crc kubenswrapper[4915]: I0127 18:59:06.801586 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:59:13 crc kubenswrapper[4915]: I0127 18:59:13.568451 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-2dx85" Jan 27 18:59:13 crc kubenswrapper[4915]: I0127 18:59:13.594696 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-4rgf6" Jan 27 18:59:13 crc kubenswrapper[4915]: I0127 18:59:13.595222 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" podStartSLOduration=46.574473388 podStartE2EDuration="50.595202078s" podCreationTimestamp="2026-01-27 18:58:23 +0000 UTC" firstStartedPulling="2026-01-27 18:59:00.645439056 +0000 UTC m=+1032.003292720" lastFinishedPulling="2026-01-27 18:59:04.666167746 +0000 UTC m=+1036.024021410" observedRunningTime="2026-01-27 18:59:05.853997776 +0000 UTC m=+1037.211851450" watchObservedRunningTime="2026-01-27 18:59:13.595202078 +0000 UTC m=+1044.953055772" Jan 27 18:59:15 crc kubenswrapper[4915]: I0127 18:59:15.215283 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-rklwj" Jan 27 18:59:15 crc kubenswrapper[4915]: I0127 18:59:15.469309 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xjclv" Jan 27 18:59:15 crc kubenswrapper[4915]: I0127 18:59:15.988083 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-bf776578d-hcmpr" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.157251 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"] Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.165436 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.177880 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.178233 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kgf9w" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.177890 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.178019 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.186772 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"] Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.206550 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"] Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.208111 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.211505 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.253420 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"] Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.255458 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgs7\" (UniqueName: \"kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.255539 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.356864 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.356937 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2nc\" (UniqueName: \"kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.357123 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.357177 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfgs7\" (UniqueName: \"kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.357221 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.358051 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.377857 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfgs7\" (UniqueName: \"kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7\") pod \"dnsmasq-dns-675f4bcbfc-6gb75\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc 
kubenswrapper[4915]: I0127 18:59:33.457952 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.458046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2nc\" (UniqueName: \"kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.458097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.458842 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.458951 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.475128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vz2nc\" (UniqueName: \"kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc\") pod \"dnsmasq-dns-78dd6ddcc-g2qtc\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.538531 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" Jan 27 18:59:33 crc kubenswrapper[4915]: I0127 18:59:33.545813 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" Jan 27 18:59:34 crc kubenswrapper[4915]: I0127 18:59:34.002570 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:59:34 crc kubenswrapper[4915]: I0127 18:59:34.004389 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"] Jan 27 18:59:34 crc kubenswrapper[4915]: I0127 18:59:34.042092 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"] Jan 27 18:59:34 crc kubenswrapper[4915]: W0127 18:59:34.048213 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fcb7d90_7d62_4d35_a53e_99ba9bc318ff.slice/crio-b46926669880d07f8383c8809c264c382d90c2734275a05e581ff80ac94523e1 WatchSource:0}: Error finding container b46926669880d07f8383c8809c264c382d90c2734275a05e581ff80ac94523e1: Status 404 returned error can't find the container with id b46926669880d07f8383c8809c264c382d90c2734275a05e581ff80ac94523e1 Jan 27 18:59:35 crc kubenswrapper[4915]: I0127 18:59:35.001712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" event={"ID":"f19fea82-c53b-46de-8e6c-ff245e7e25af","Type":"ContainerStarted","Data":"c4365d32a06b85d6af4dc34a2d259ad1e336f49e6acffd1c455449b66ff2b6ea"} Jan 27 18:59:35 crc 
kubenswrapper[4915]: I0127 18:59:35.003863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" event={"ID":"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff","Type":"ContainerStarted","Data":"b46926669880d07f8383c8809c264c382d90c2734275a05e581ff80ac94523e1"} Jan 27 18:59:35 crc kubenswrapper[4915]: I0127 18:59:35.893671 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"] Jan 27 18:59:35 crc kubenswrapper[4915]: I0127 18:59:35.922555 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"] Jan 27 18:59:35 crc kubenswrapper[4915]: I0127 18:59:35.923647 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:35 crc kubenswrapper[4915]: I0127 18:59:35.933323 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"] Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.114555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntcw\" (UniqueName: \"kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.114616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.114674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.180143 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"] Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.203894 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.204954 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.215467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tntcw\" (UniqueName: \"kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.215505 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.215557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.216379 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.216558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.218438 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.254460 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tntcw\" (UniqueName: \"kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw\") pod \"dnsmasq-dns-666b6646f7-2wgvn\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") " pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.319912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.323531 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drf7\" (UniqueName: \"kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc 
kubenswrapper[4915]: I0127 18:59:36.323604 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.425182 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drf7\" (UniqueName: \"kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.425240 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.425314 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.426173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.426549 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.454697 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drf7\" (UniqueName: \"kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7\") pod \"dnsmasq-dns-57d769cc4f-zzsgm\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.549024 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" Jan 27 18:59:36 crc kubenswrapper[4915]: I0127 18:59:36.553753 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.025212 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 18:59:37 crc kubenswrapper[4915]: W0127 18:59:37.025996 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03628b30_c873_40cb_a7f9_acea32d1f486.slice/crio-7b7537387e550303788dece9a253ae5c826c3daf85c8efa19baeceb25a16f945 WatchSource:0}: Error finding container 7b7537387e550303788dece9a253ae5c826c3daf85c8efa19baeceb25a16f945: Status 404 returned error can't find the container with id 7b7537387e550303788dece9a253ae5c826c3daf85c8efa19baeceb25a16f945 Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.062461 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.064072 4915 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067102 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067455 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067587 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067628 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ztm7t" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067692 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067865 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.067977 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.081800 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.092957 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"] Jan 27 18:59:37 crc kubenswrapper[4915]: W0127 18:59:37.109180 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0060a7f6_5f56_4ca9_87db_97ee5e0a2ea5.slice/crio-cfa8bfbf5476490fbeba459fed4efad788fd1ee9ff915582330c9f6f8cb8a639 WatchSource:0}: Error finding container cfa8bfbf5476490fbeba459fed4efad788fd1ee9ff915582330c9f6f8cb8a639: Status 404 returned 
error can't find the container with id cfa8bfbf5476490fbeba459fed4efad788fd1ee9ff915582330c9f6f8cb8a639 Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134267 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134298 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134322 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134346 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134446 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134513 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbct\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134647 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134863 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134894 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 
18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.134929 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.235889 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.235938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.235973 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.235990 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236009 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236051 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236125 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " 
pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.236155 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbct\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.237073 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.237181 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.237421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.237638 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242457 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242507 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242635 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242639 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.242777 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.250279 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbct\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.261410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.361365 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.366643 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.372395 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.372541 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.372649 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.372906 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.373035 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.373109 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.373313 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c85s8" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.384440 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.390298 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540007 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540082 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540110 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjk48\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540132 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540168 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540195 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540235 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540322 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540344 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.540365 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.641928 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.641970 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjk48\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.641999 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 
crc kubenswrapper[4915]: I0127 18:59:37.642051 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642120 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642139 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642172 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.642208 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.643567 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.644738 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.645019 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.645055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.645163 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.645672 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.649183 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.649204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.649358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 
27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.653151 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.678195 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjk48\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.694877 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:37 crc kubenswrapper[4915]: I0127 18:59:37.996165 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.034373 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" event={"ID":"03628b30-c873-40cb-a7f9-acea32d1f486","Type":"ContainerStarted","Data":"7b7537387e550303788dece9a253ae5c826c3daf85c8efa19baeceb25a16f945"} Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.035353 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" event={"ID":"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5","Type":"ContainerStarted","Data":"cfa8bfbf5476490fbeba459fed4efad788fd1ee9ff915582330c9f6f8cb8a639"} Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.589266 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.590452 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.595572 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.595904 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.596305 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.598319 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6pqhv" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.603182 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.610563 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757339 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757401 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757421 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757486 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc 
kubenswrapper[4915]: I0127 18:59:38.757553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7zkx\" (UniqueName: \"kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757592 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.757654 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861268 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861351 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861424 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861451 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861478 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7zkx\" (UniqueName: \"kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861535 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861559 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861606 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.861760 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.863621 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.863653 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.863932 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.864519 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " 
pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.866954 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.870428 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.882905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7zkx\" (UniqueName: \"kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.886843 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " pod="openstack/openstack-galera-0" Jan 27 18:59:38 crc kubenswrapper[4915]: I0127 18:59:38.916056 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.981686 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.983150 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.988101 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.988727 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-stslp" Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.988936 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.989048 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 18:59:39 crc kubenswrapper[4915]: I0127 18:59:39.989141 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.096120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.096241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.096282 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.098269 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28sl6\" (UniqueName: \"kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.098367 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.098406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.098485 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.098705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201733 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28sl6\" (UniqueName: \"kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201930 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201954 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.201993 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.202021 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.202177 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.202802 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.203060 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.203511 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.204036 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.219440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.219640 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc 
kubenswrapper[4915]: I0127 18:59:40.235042 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.235711 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28sl6\" (UniqueName: \"kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6\") pod \"openstack-cell1-galera-0\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.308055 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.324566 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.325469 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.330112 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.330205 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.330549 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sxsw2" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.340491 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.505950 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhj2\" (UniqueName: \"kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.505994 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.506023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.506037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.506428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.607868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhj2\" (UniqueName: \"kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.608161 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.608246 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.608318 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data\") pod \"memcached-0\" (UID: 
\"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.608420 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.609091 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.609190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.631195 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.631274 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.634367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhj2\" (UniqueName: 
\"kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2\") pod \"memcached-0\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " pod="openstack/memcached-0" Jan 27 18:59:40 crc kubenswrapper[4915]: I0127 18:59:40.646232 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:59:41 crc kubenswrapper[4915]: I0127 18:59:41.909642 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:59:41 crc kubenswrapper[4915]: I0127 18:59:41.910963 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:59:41 crc kubenswrapper[4915]: I0127 18:59:41.916358 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-k2r99" Jan 27 18:59:41 crc kubenswrapper[4915]: I0127 18:59:41.919020 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 18:59:42 crc kubenswrapper[4915]: I0127 18:59:42.028046 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hztkr\" (UniqueName: \"kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr\") pod \"kube-state-metrics-0\" (UID: \"4c362f63-191f-4589-80ee-212b909db51e\") " pod="openstack/kube-state-metrics-0" Jan 27 18:59:42 crc kubenswrapper[4915]: I0127 18:59:42.129542 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hztkr\" (UniqueName: \"kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr\") pod \"kube-state-metrics-0\" (UID: \"4c362f63-191f-4589-80ee-212b909db51e\") " pod="openstack/kube-state-metrics-0" Jan 27 18:59:42 crc kubenswrapper[4915]: I0127 18:59:42.151275 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hztkr\" (UniqueName: 
\"kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr\") pod \"kube-state-metrics-0\" (UID: \"4c362f63-191f-4589-80ee-212b909db51e\") " pod="openstack/kube-state-metrics-0" Jan 27 18:59:42 crc kubenswrapper[4915]: I0127 18:59:42.233540 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.219435 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lrzgd"] Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.221026 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.222880 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-srlw2" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.223291 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.223670 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.232471 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lrzgd"] Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.308932 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"] Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.310306 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.320665 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"] Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393089 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393560 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393731 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4fp\" (UniqueName: \"kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393816 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.393938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.394011 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495250 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4fp\" (UniqueName: \"kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495362 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg78k\" (UniqueName: 
\"kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495436 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495525 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495551 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log\") pod \"ovn-controller-ovs-g8pg6\" (UID: 
\"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495621 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495642 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.495721 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " 
pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.496159 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.496373 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.496631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.499580 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.510258 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.516329 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4fp\" 
(UniqueName: \"kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.543214 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs\") pod \"ovn-controller-lrzgd\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.551470 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597363 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597427 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg78k\" (UniqueName: \"kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597542 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597873 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.597937 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.598035 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib\") pod \"ovn-controller-ovs-g8pg6\" (UID: 
\"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.598186 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.600381 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.624304 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg78k\" (UniqueName: \"kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k\") pod \"ovn-controller-ovs-g8pg6\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") " pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:46 crc kubenswrapper[4915]: I0127 18:59:46.624997 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.116200 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.118352 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.121300 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.121607 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.122144 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.123890 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dx5tg" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.124071 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.124151 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207351 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mphc9\" (UniqueName: \"kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207475 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207513 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207532 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.207601 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309230 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309322 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphc9\" (UniqueName: \"kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309399 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 
18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309552 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309784 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.309902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.313251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.313490 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.314108 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.315128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.318716 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.319148 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.332912 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphc9\" (UniqueName: \"kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.343374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:47 crc kubenswrapper[4915]: I0127 18:59:47.448419 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.852962 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.854964 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.859492 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.859720 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.859859 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vdfcf" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.859986 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.875370 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956639 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956673 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956708 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956744 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956774 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956819 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7dvs\" (UniqueName: \"kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:48 crc kubenswrapper[4915]: I0127 18:59:48.956867 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.057944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058299 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7dvs\" (UniqueName: \"kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058378 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058480 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058515 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.058609 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.059291 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.059976 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.061046 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.063288 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.063346 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.065713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.069041 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.075406 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7dvs\" (UniqueName: \"kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.107133 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.164449 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 18:59:49 crc kubenswrapper[4915]: I0127 18:59:49.181312 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 18:59:49 crc kubenswrapper[4915]: W0127 18:59:49.568388 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb01be235_2ab9_4e61_a5a4_1d006a9e6679.slice/crio-45f7a51ba93b3d190c66f83b3dbf1c7803ec32c1aea00154175d792189c6c6ba WatchSource:0}: Error finding container 45f7a51ba93b3d190c66f83b3dbf1c7803ec32c1aea00154175d792189c6c6ba: Status 404 returned error can't find the container with id 45f7a51ba93b3d190c66f83b3dbf1c7803ec32c1aea00154175d792189c6c6ba
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.131644 4915 generic.go:334] "Generic (PLEG): container finished" podID="03628b30-c873-40cb-a7f9-acea32d1f486" containerID="27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5" exitCode=0
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.131723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" event={"ID":"03628b30-c873-40cb-a7f9-acea32d1f486","Type":"ContainerDied","Data":"27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5"}
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.135247 4915 generic.go:334] "Generic (PLEG): container finished" podID="f19fea82-c53b-46de-8e6c-ff245e7e25af" containerID="2584fb0c0a10a516cb1a7912681d1f367955b5e391063db67fb478b646f0ccdd" exitCode=0
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.135309 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" event={"ID":"f19fea82-c53b-46de-8e6c-ff245e7e25af","Type":"ContainerDied","Data":"2584fb0c0a10a516cb1a7912681d1f367955b5e391063db67fb478b646f0ccdd"}
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.137261 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" event={"ID":"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff","Type":"ContainerStarted","Data":"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"}
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.140584 4915 generic.go:334] "Generic (PLEG): container finished" podID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerID="10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db" exitCode=0
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.140644 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" event={"ID":"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5","Type":"ContainerDied","Data":"10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db"}
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.143002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b01be235-2ab9-4e61-a5a4-1d006a9e6679","Type":"ContainerStarted","Data":"45f7a51ba93b3d190c66f83b3dbf1c7803ec32c1aea00154175d792189c6c6ba"}
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.190766 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.307194 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lrzgd"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.318462 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.322543 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: W0127 18:59:50.322768 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69486e9e_4ef8_4749_842f_a38dfeba60d3.slice/crio-c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7 WatchSource:0}: Error finding container c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7: Status 404 returned error can't find the container with id c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.337165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.389912 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"]
Jan 27 18:59:50 crc kubenswrapper[4915]: W0127 18:59:50.395908 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf070ee25_edfb_4020_b526_3ec9d6c727bc.slice/crio-12a34a821760a0f530c261e714f0db2f0f22dc4335757b7906bda953def91aa4 WatchSource:0}: Error finding container 12a34a821760a0f530c261e714f0db2f0f22dc4335757b7906bda953def91aa4: Status 404 returned error can't find the container with id 12a34a821760a0f530c261e714f0db2f0f22dc4335757b7906bda953def91aa4
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.588305 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc"
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.599981 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75"
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.675685 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.695604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc\") pod \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") "
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.695844 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config\") pod \"f19fea82-c53b-46de-8e6c-ff245e7e25af\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") "
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.696897 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfgs7\" (UniqueName: \"kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7\") pod \"f19fea82-c53b-46de-8e6c-ff245e7e25af\" (UID: \"f19fea82-c53b-46de-8e6c-ff245e7e25af\") "
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.696940 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz2nc\" (UniqueName: \"kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc\") pod \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") "
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.697017 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config\") pod \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\" (UID: \"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff\") "
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.705050 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7" (OuterVolumeSpecName: "kube-api-access-hfgs7") pod "f19fea82-c53b-46de-8e6c-ff245e7e25af" (UID: "f19fea82-c53b-46de-8e6c-ff245e7e25af"). InnerVolumeSpecName "kube-api-access-hfgs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.705221 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc" (OuterVolumeSpecName: "kube-api-access-vz2nc") pod "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" (UID: "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff"). InnerVolumeSpecName "kube-api-access-vz2nc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.719365 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config" (OuterVolumeSpecName: "config") pod "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" (UID: "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.720023 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" (UID: "5fcb7d90-7d62-4d35-a53e-99ba9bc318ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.724881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config" (OuterVolumeSpecName: "config") pod "f19fea82-c53b-46de-8e6c-ff245e7e25af" (UID: "f19fea82-c53b-46de-8e6c-ff245e7e25af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.770394 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.798729 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.798770 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.798855 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19fea82-c53b-46de-8e6c-ff245e7e25af-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.798871 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfgs7\" (UniqueName: \"kubernetes.io/projected/f19fea82-c53b-46de-8e6c-ff245e7e25af-kube-api-access-hfgs7\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:50 crc kubenswrapper[4915]: I0127 18:59:50.798889 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz2nc\" (UniqueName: \"kubernetes.io/projected/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff-kube-api-access-vz2nc\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.153186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerStarted","Data":"e24c285ab5913c5a999c9b9b5afcb30590ee7a769cd4ffc0ffa815e68c1209cb"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.154686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerStarted","Data":"e41f4777fadbbb920de51d9857441037e2199f54ad2cdc08d5b5adce5f49bc85"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.155954 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerStarted","Data":"065c4061afa5e35d12717c28d2f122b0d975243697a3da3af487cc5b3e57e0ed"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.157648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" event={"ID":"03628b30-c873-40cb-a7f9-acea32d1f486","Type":"ContainerStarted","Data":"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.158637 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.160023 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerStarted","Data":"c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.161568 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd" event={"ID":"aae7274f-1da9-4023-96b1-30cca477c6a2","Type":"ContainerStarted","Data":"dd632c343111af79be5b2bd34fcfdb9299d5d6bec7bbe59196c550d950a3cf41"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.163617 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" event={"ID":"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5","Type":"ContainerStarted","Data":"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.164181 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.165283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerStarted","Data":"12a34a821760a0f530c261e714f0db2f0f22dc4335757b7906bda953def91aa4"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.167086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75" event={"ID":"f19fea82-c53b-46de-8e6c-ff245e7e25af","Type":"ContainerDied","Data":"c4365d32a06b85d6af4dc34a2d259ad1e336f49e6acffd1c455449b66ff2b6ea"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.167109 4915 scope.go:117] "RemoveContainer" containerID="2584fb0c0a10a516cb1a7912681d1f367955b5e391063db67fb478b646f0ccdd"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.167237 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6gb75"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.171302 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c362f63-191f-4589-80ee-212b909db51e","Type":"ContainerStarted","Data":"f7b4f88ad3fb139053d97a533796c44be723c4d452eb537e9ac21d1a000075db"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.175678 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" podStartSLOduration=2.457183957 podStartE2EDuration="15.175660069s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:37.028421197 +0000 UTC m=+1068.386274861" lastFinishedPulling="2026-01-27 18:59:49.746897319 +0000 UTC m=+1081.104750973" observedRunningTime="2026-01-27 18:59:51.174001878 +0000 UTC m=+1082.531855542" watchObservedRunningTime="2026-01-27 18:59:51.175660069 +0000 UTC m=+1082.533513743"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.181165 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerStarted","Data":"7e61a28371d8914f1878731e91b9c9df8fde6ae215a001eae37c46265542323e"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.184460 4915 generic.go:334] "Generic (PLEG): container finished" podID="5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" containerID="225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5" exitCode=0
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.184504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" event={"ID":"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff","Type":"ContainerDied","Data":"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.184534 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc" event={"ID":"5fcb7d90-7d62-4d35-a53e-99ba9bc318ff","Type":"ContainerDied","Data":"b46926669880d07f8383c8809c264c382d90c2734275a05e581ff80ac94523e1"}
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.184585 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g2qtc"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.202993 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" podStartSLOduration=3.5826464749999998 podStartE2EDuration="16.202973995s" podCreationTimestamp="2026-01-27 18:59:35 +0000 UTC" firstStartedPulling="2026-01-27 18:59:37.110915578 +0000 UTC m=+1068.468769242" lastFinishedPulling="2026-01-27 18:59:49.731243108 +0000 UTC m=+1081.089096762" observedRunningTime="2026-01-27 18:59:51.195137404 +0000 UTC m=+1082.552991098" watchObservedRunningTime="2026-01-27 18:59:51.202973995 +0000 UTC m=+1082.560827659"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.236632 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"]
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.251556 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6gb75"]
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.261574 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"]
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.267327 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g2qtc"]
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.369676 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" path="/var/lib/kubelet/pods/5fcb7d90-7d62-4d35-a53e-99ba9bc318ff/volumes"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.370347 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19fea82-c53b-46de-8e6c-ff245e7e25af" path="/var/lib/kubelet/pods/f19fea82-c53b-46de-8e6c-ff245e7e25af/volumes"
Jan 27 18:59:51 crc kubenswrapper[4915]: I0127 18:59:51.700825 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 18:59:51 crc kubenswrapper[4915]: W0127 18:59:51.862737 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2e3568_b735_4d0a_a6ff_a4862f244a53.slice/crio-ac9a30e19f080413ec85fb2769002407bada744201034f2e8c8abc3d4efce794 WatchSource:0}: Error finding container ac9a30e19f080413ec85fb2769002407bada744201034f2e8c8abc3d4efce794: Status 404 returned error can't find the container with id ac9a30e19f080413ec85fb2769002407bada744201034f2e8c8abc3d4efce794
Jan 27 18:59:52 crc kubenswrapper[4915]: I0127 18:59:52.191621 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerStarted","Data":"ac9a30e19f080413ec85fb2769002407bada744201034f2e8c8abc3d4efce794"}
Jan 27 18:59:52 crc kubenswrapper[4915]: I0127 18:59:52.826927 4915 scope.go:117] "RemoveContainer" containerID="225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"
Jan 27 18:59:55 crc kubenswrapper[4915]: I0127 18:59:55.948558 4915 scope.go:117] "RemoveContainer" containerID="225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"
Jan 27 18:59:55 crc kubenswrapper[4915]: E0127 18:59:55.949426 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5\": container with ID starting with 225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5 not found: ID does not exist" containerID="225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"
Jan 27 18:59:55 crc kubenswrapper[4915]: I0127 18:59:55.949456 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5"} err="failed to get container status \"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5\": rpc error: code = NotFound desc = could not find container \"225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5\": container with ID starting with 225603af5954befd9be23fde64bb9885d60722f3ec3123f8546f4492d537cbc5 not found: ID does not exist"
Jan 27 18:59:56 crc kubenswrapper[4915]: I0127 18:59:56.551753 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn"
Jan 27 18:59:56 crc kubenswrapper[4915]: I0127 18:59:56.555923 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm"
Jan 27 18:59:56 crc kubenswrapper[4915]: I0127 18:59:56.626705 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"]
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.238582 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="dnsmasq-dns" containerID="cri-o://ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8" gracePeriod=10
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.787652 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn"
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.922691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tntcw\" (UniqueName: \"kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw\") pod \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") "
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.922838 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc\") pod \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") "
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.922867 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config\") pod \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\" (UID: \"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5\") "
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.929523 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw" (OuterVolumeSpecName: "kube-api-access-tntcw") pod "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" (UID: "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5"). InnerVolumeSpecName "kube-api-access-tntcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.968076 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" (UID: "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:57 crc kubenswrapper[4915]: I0127 18:59:57.984618 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config" (OuterVolumeSpecName: "config") pod "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" (UID: "0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.024436 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tntcw\" (UniqueName: \"kubernetes.io/projected/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-kube-api-access-tntcw\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.024471 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.024480 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.261430 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b01be235-2ab9-4e61-a5a4-1d006a9e6679","Type":"ContainerStarted","Data":"e8dc10179f8954b400ea8cb8db73ee6638c728824ddb7504365e79a8f13b926f"}
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.261577 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.267034 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerStarted","Data":"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b"}
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.269865 4915 generic.go:334] "Generic (PLEG): container finished" podID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerID="ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8" exitCode=0
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.269906 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.269929 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" event={"ID":"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5","Type":"ContainerDied","Data":"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"}
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.269969 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2wgvn" event={"ID":"0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5","Type":"ContainerDied","Data":"cfa8bfbf5476490fbeba459fed4efad788fd1ee9ff915582330c9f6f8cb8a639"}
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.269993 4915 scope.go:117] "RemoveContainer" containerID="ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.292635 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.651175261 podStartE2EDuration="18.292610521s" podCreationTimestamp="2026-01-27 18:59:40 +0000 UTC" firstStartedPulling="2026-01-27 18:59:49.600599684 +0000 UTC m=+1080.958453348" lastFinishedPulling="2026-01-27 18:59:56.242034934 +0000 UTC m=+1087.599888608" observedRunningTime="2026-01-27 18:59:58.280653099 +0000 UTC m=+1089.638506763" watchObservedRunningTime="2026-01-27 18:59:58.292610521 +0000 UTC m=+1089.650464195"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.307899 4915 scope.go:117] "RemoveContainer" containerID="10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.357903 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"]
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.365067 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2wgvn"]
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.452341 4915 scope.go:117] "RemoveContainer" containerID="ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"
Jan 27 18:59:58 crc kubenswrapper[4915]: E0127 18:59:58.453015 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8\": container with ID starting with ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8 not found: ID does not exist" containerID="ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.453086 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8"} err="failed to get container status \"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8\": rpc error: code = NotFound desc = could not find container \"ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8\": container with ID starting with ee09cdc350e8ec029ed1701b11d7195985a82e9bad4b6de558f28930a3d676b8 not found: ID does not exist"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.453123 4915 scope.go:117] "RemoveContainer" containerID="10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db"
Jan 27 18:59:58 crc kubenswrapper[4915]: E0127 18:59:58.453614 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db\": container with ID starting with 10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db not found: ID does not exist" containerID="10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db"
Jan 27 18:59:58 crc kubenswrapper[4915]: I0127 18:59:58.453650 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db"} err="failed to get container status \"10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db\": rpc error: code = NotFound desc = could not find container \"10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db\": container with ID starting with 10ed065c1aa5955d747c210ed54a8bf3547cf60f738fc305f0078ca0244151db not found: ID does not exist"
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.280880 4915 generic.go:334] "Generic (PLEG): container finished" podID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerID="acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e" exitCode=0
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.280967 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerDied","Data":"acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.282768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerStarted","Data":"1358fb4c705e4868c1b83ea13e0f2c10cac4558883cc330d554154fd44be9f97"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.284213 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerStarted","Data":"1dd3f550ad87852c603ab43ce4c551f3e98213b4a6234d95db5496706e6065eb"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.288040 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd" event={"ID":"aae7274f-1da9-4023-96b1-30cca477c6a2","Type":"ContainerStarted","Data":"7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.288163 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-lrzgd"
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.290621 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerStarted","Data":"0ab2050c29aa330700365f1e8b79ab3720b9477e1281313c21a50e2387bf14ae"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.291905 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerStarted","Data":"b4e015d7524e506289b969ff9782e3d4f4f302efdbc8387341c6e0b4bb26a7fc"}
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.377241 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lrzgd" podStartSLOduration=6.7686352020000005 podStartE2EDuration="13.37719966s" podCreationTimestamp="2026-01-27 18:59:46 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.31138257 +0000 UTC m=+1081.669236234" lastFinishedPulling="2026-01-27 18:59:56.919947028 +0000 UTC m=+1088.277800692" observedRunningTime="2026-01-27 18:59:59.330235025 +0000 UTC m=+1090.688088689" watchObservedRunningTime="2026-01-27 18:59:59.37719966 +0000 UTC m=+1090.735053324"
Jan 27 18:59:59 crc kubenswrapper[4915]: I0127 18:59:59.400469 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" path="/var/lib/kubelet/pods/0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5/volumes"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152035 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"]
Jan 27 19:00:00 crc kubenswrapper[4915]: E0127 19:00:00.152438 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19fea82-c53b-46de-8e6c-ff245e7e25af" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152459 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19fea82-c53b-46de-8e6c-ff245e7e25af" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: E0127 19:00:00.152476 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="dnsmasq-dns"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152484 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="dnsmasq-dns"
Jan 27 19:00:00 crc kubenswrapper[4915]: E0127 19:00:00.152517 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152525 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: E0127 19:00:00.152545 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152552 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="init"
Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152751 4915 memory_manager.go:354] "RemoveStaleState removing state"
podUID="f19fea82-c53b-46de-8e6c-ff245e7e25af" containerName="init" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152765 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0060a7f6-5f56-4ca9-87db-97ee5e0a2ea5" containerName="dnsmasq-dns" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.152777 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcb7d90-7d62-4d35-a53e-99ba9bc318ff" containerName="init" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.153553 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.159138 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.159273 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.159300 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"] Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.265607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.266095 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7n9\" (UniqueName: \"kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9\") 
pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.266133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.301897 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerStarted","Data":"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"} Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.306046 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerStarted","Data":"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"} Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.367753 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.367901 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7n9\" (UniqueName: \"kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9\") pod \"collect-profiles-29492340-srbs2\" (UID: 
\"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.367944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.369716 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.386423 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7n9\" (UniqueName: \"kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.401682 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume\") pod \"collect-profiles-29492340-srbs2\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:00 crc kubenswrapper[4915]: I0127 19:00:00.528666 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.315725 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerStarted","Data":"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"} Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.317239 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.317281 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.319041 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c362f63-191f-4589-80ee-212b909db51e","Type":"ContainerStarted","Data":"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9"} Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.319067 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.339592 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-g8pg6" podStartSLOduration=8.980069069 podStartE2EDuration="15.339570697s" podCreationTimestamp="2026-01-27 18:59:46 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.399259362 +0000 UTC m=+1081.757113026" lastFinishedPulling="2026-01-27 18:59:56.75876099 +0000 UTC m=+1088.116614654" observedRunningTime="2026-01-27 19:00:01.333882208 +0000 UTC m=+1092.691735882" watchObservedRunningTime="2026-01-27 19:00:01.339570697 +0000 UTC m=+1092.697424381" Jan 27 19:00:01 crc kubenswrapper[4915]: I0127 19:00:01.356217 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=10.571307143 podStartE2EDuration="20.356199972s" podCreationTimestamp="2026-01-27 18:59:41 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.34542628 +0000 UTC m=+1081.703279944" lastFinishedPulling="2026-01-27 19:00:00.130319119 +0000 UTC m=+1091.488172773" observedRunningTime="2026-01-27 19:00:01.352179284 +0000 UTC m=+1092.710032948" watchObservedRunningTime="2026-01-27 19:00:01.356199972 +0000 UTC m=+1092.714053636" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.056211 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"] Jan 27 19:00:02 crc kubenswrapper[4915]: W0127 19:00:02.076557 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4cf525f_49ce_424b_8dbc_1c3f807b78d7.slice/crio-f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc WatchSource:0}: Error finding container f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc: Status 404 returned error can't find the container with id f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.329439 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerStarted","Data":"f92d8556511e3b49e269eccc8efb0fb37188fb77e03e17b4ccb7a23b18b8ec13"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.339605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" event={"ID":"d4cf525f-49ce-424b-8dbc-1c3f807b78d7","Type":"ContainerStarted","Data":"c01b0c9b5aeafc4712f319baa61e2fc29c98c01b0aad9c6b22b1ebfea66c9459"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.340074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" event={"ID":"d4cf525f-49ce-424b-8dbc-1c3f807b78d7","Type":"ContainerStarted","Data":"f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.345815 4915 generic.go:334] "Generic (PLEG): container finished" podID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerID="baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.345906 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerDied","Data":"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.351844 4915 generic.go:334] "Generic (PLEG): container finished" podID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerID="1dd3f550ad87852c603ab43ce4c551f3e98213b4a6234d95db5496706e6065eb" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.351937 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerDied","Data":"1dd3f550ad87852c603ab43ce4c551f3e98213b4a6234d95db5496706e6065eb"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.359564 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerStarted","Data":"c06502ec99303d1016158ff95946da489a6d421aee3a2b886bff7e1bb8d8498f"} Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.381542 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.433528625 podStartE2EDuration="15.381528237s" podCreationTimestamp="2026-01-27 18:59:47 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.782141476 +0000 
UTC m=+1082.139995150" lastFinishedPulling="2026-01-27 19:00:01.730141098 +0000 UTC m=+1093.087994762" observedRunningTime="2026-01-27 19:00:02.352133481 +0000 UTC m=+1093.709987185" watchObservedRunningTime="2026-01-27 19:00:02.381528237 +0000 UTC m=+1093.739381901" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.429060 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" podStartSLOduration=2.429024625 podStartE2EDuration="2.429024625s" podCreationTimestamp="2026-01-27 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:02.42595676 +0000 UTC m=+1093.783810424" watchObservedRunningTime="2026-01-27 19:00:02.429024625 +0000 UTC m=+1093.786878289" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.449558 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.449638 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.465496 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.621448154 podStartE2EDuration="16.465475514s" podCreationTimestamp="2026-01-27 18:59:46 +0000 UTC" firstStartedPulling="2026-01-27 18:59:51.865614808 +0000 UTC m=+1083.223468472" lastFinishedPulling="2026-01-27 19:00:01.709642168 +0000 UTC m=+1093.067495832" observedRunningTime="2026-01-27 19:00:02.447021524 +0000 UTC m=+1093.804875188" watchObservedRunningTime="2026-01-27 19:00:02.465475514 +0000 UTC m=+1093.823329188" Jan 27 19:00:02 crc kubenswrapper[4915]: I0127 19:00:02.508374 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 
19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.370268 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerStarted","Data":"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950"} Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.373756 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerStarted","Data":"b05c34d07645b4a14d56b4d52e63ae462133c80866f332009ef2ce490415bcdc"} Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.377407 4915 generic.go:334] "Generic (PLEG): container finished" podID="d4cf525f-49ce-424b-8dbc-1c3f807b78d7" containerID="c01b0c9b5aeafc4712f319baa61e2fc29c98c01b0aad9c6b22b1ebfea66c9459" exitCode=0 Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.377558 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" event={"ID":"d4cf525f-49ce-424b-8dbc-1c3f807b78d7","Type":"ContainerDied","Data":"c01b0c9b5aeafc4712f319baa61e2fc29c98c01b0aad9c6b22b1ebfea66c9459"} Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.390834 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.322829009 podStartE2EDuration="25.390815331s" podCreationTimestamp="2026-01-27 18:59:38 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.69083693 +0000 UTC m=+1082.048690584" lastFinishedPulling="2026-01-27 18:59:56.758823242 +0000 UTC m=+1088.116676906" observedRunningTime="2026-01-27 19:00:03.389751026 +0000 UTC m=+1094.747604710" watchObservedRunningTime="2026-01-27 19:00:03.390815331 +0000 UTC m=+1094.748668995" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.419436 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=19.836173447 podStartE2EDuration="26.419415939s" podCreationTimestamp="2026-01-27 18:59:37 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.332857813 +0000 UTC m=+1081.690711477" lastFinishedPulling="2026-01-27 18:59:56.916100305 +0000 UTC m=+1088.273953969" observedRunningTime="2026-01-27 19:00:03.413955556 +0000 UTC m=+1094.771809220" watchObservedRunningTime="2026-01-27 19:00:03.419415939 +0000 UTC m=+1094.777269613" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.422008 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.692837 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"] Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.694358 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.700506 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.701780 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"] Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.723317 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zdx67"] Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.724355 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.728464 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.749920 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zdx67"] Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7sx\" (UniqueName: \"kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846283 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846301 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") 
" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jl87\" (UniqueName: \"kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846379 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846394 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846409 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: 
\"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.846445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.947902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948035 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7sx\" (UniqueName: \"kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948099 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: 
\"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948122 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948163 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948205 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jl87\" (UniqueName: \"kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " 
pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948266 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948285 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.948266 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.949396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.949480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.949482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.949696 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.956657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.956856 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.965114 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jl87\" (UniqueName: \"kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87\") pod \"ovn-controller-metrics-zdx67\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") " pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:03 crc kubenswrapper[4915]: I0127 19:00:03.971861 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7sx\" (UniqueName: \"kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx\") pod \"dnsmasq-dns-5bf47b49b7-xtgkp\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.020272 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.053215 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.118565 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.149151 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-hf52h"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.150349 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.156650 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.162255 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hf52h"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.185242 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.185368 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.241350 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.254004 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.256451 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsgpf\" (UniqueName: \"kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.256495 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.256548 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.256567 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.358679 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.358773 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsgpf\" (UniqueName: \"kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.358945 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.359014 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.359052 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.360768 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.361726 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.362941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.364946 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.386475 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsgpf\" (UniqueName: \"kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf\") pod \"dnsmasq-dns-8554648995-hf52h\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.441572 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.497166 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.527595 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.631219 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.637614 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.641011 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zdx67"]
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.642923 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.642996 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jc4hg"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.643627 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.645690 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.649082 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 19:00:04 crc kubenswrapper[4915]: W0127 19:00:04.662394 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84da5d2b_b9c1_41ef_9222_aaf3e67ff232.slice/crio-5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d WatchSource:0}: Error finding container 5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d: Status 404 returned error can't find the container with id 5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.724642 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.767949 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768060 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768097 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768116 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768279 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr2qc\" (UniqueName: \"kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768369 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.768615 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869332 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume\") pod \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") "
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869463 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume\") pod \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") "
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869500 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj7n9\" (UniqueName: \"kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9\") pod \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\" (UID: \"d4cf525f-49ce-424b-8dbc-1c3f807b78d7\") "
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869849 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869936 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869973 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.869997 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.870035 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr2qc\" (UniqueName: \"kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.870071 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.870128 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.870182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4cf525f-49ce-424b-8dbc-1c3f807b78d7" (UID: "d4cf525f-49ce-424b-8dbc-1c3f807b78d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.871084 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.873997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.874772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.878253 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4cf525f-49ce-424b-8dbc-1c3f807b78d7" (UID: "d4cf525f-49ce-424b-8dbc-1c3f807b78d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.879621 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.880408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.881986 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9" (OuterVolumeSpecName: "kube-api-access-pj7n9") pod "d4cf525f-49ce-424b-8dbc-1c3f807b78d7" (UID: "d4cf525f-49ce-424b-8dbc-1c3f807b78d7"). InnerVolumeSpecName "kube-api-access-pj7n9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.883523 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.904705 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr2qc\" (UniqueName: \"kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc\") pod \"ovn-northd-0\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") " pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.964165 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.973074 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.973528 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj7n9\" (UniqueName: \"kubernetes.io/projected/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-kube-api-access-pj7n9\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:04 crc kubenswrapper[4915]: I0127 19:00:04.973635 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4cf525f-49ce-424b-8dbc-1c3f807b78d7-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.049155 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hf52h"]
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.405236 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2" event={"ID":"d4cf525f-49ce-424b-8dbc-1c3f807b78d7","Type":"ContainerDied","Data":"f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc"}
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.405574 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a6000fe51eb9728bddcca0f10cde949e80e98e60a07f7487636c789c78d9fc"
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.405295 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.409246 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" event={"ID":"1319812a-0ea7-4cf3-bbd7-55ecedbf696e","Type":"ContainerStarted","Data":"739a412abf47ef2ab720ac84feea4a00c96a47ed1f89457d555f2ecc2fa396ed"}
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.415143 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hf52h" event={"ID":"d38f6264-7883-4f50-87bb-870718f430e8","Type":"ContainerStarted","Data":"c5d528ba38b65758d61234728185cabc377d434bbc727c21c69481289c1df353"}
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.417903 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zdx67" event={"ID":"84da5d2b-b9c1-41ef-9222-aaf3e67ff232","Type":"ContainerStarted","Data":"5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d"}
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.557660 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 19:00:05 crc kubenswrapper[4915]: I0127 19:00:05.649388 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 27 19:00:06 crc kubenswrapper[4915]: I0127 19:00:06.425833 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerStarted","Data":"00431e5452a68d033b8efb59d411dffca92590a52b07644b362fb8b40d74d9de"}
Jan 27 19:00:08 crc kubenswrapper[4915]: I0127 19:00:08.917585 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 19:00:08 crc kubenswrapper[4915]: I0127 19:00:08.917969 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 19:00:10 crc kubenswrapper[4915]: I0127 19:00:10.309137 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 19:00:10 crc kubenswrapper[4915]: I0127 19:00:10.309189 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.470673 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zdx67" event={"ID":"84da5d2b-b9c1-41ef-9222-aaf3e67ff232","Type":"ContainerStarted","Data":"de86cd114c117f9cffe320ebf31f70906d6b6d5b162d70c11a8ccad94f9510e2"}
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.472959 4915 generic.go:334] "Generic (PLEG): container finished" podID="1319812a-0ea7-4cf3-bbd7-55ecedbf696e" containerID="876742ed0d6bf891f4af922915ecaef3afccae85f62b07c1dc6ee656008749d1" exitCode=0
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.473012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" event={"ID":"1319812a-0ea7-4cf3-bbd7-55ecedbf696e","Type":"ContainerDied","Data":"876742ed0d6bf891f4af922915ecaef3afccae85f62b07c1dc6ee656008749d1"}
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.475650 4915 generic.go:334] "Generic (PLEG): container finished" podID="d38f6264-7883-4f50-87bb-870718f430e8" containerID="71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32" exitCode=0
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.475688 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hf52h" event={"ID":"d38f6264-7883-4f50-87bb-870718f430e8","Type":"ContainerDied","Data":"71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32"}
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.501239 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zdx67" podStartSLOduration=8.501213197 podStartE2EDuration="8.501213197s" podCreationTimestamp="2026-01-27 19:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:11.4940009 +0000 UTC m=+1102.851854564" watchObservedRunningTime="2026-01-27 19:00:11.501213197 +0000 UTC m=+1102.859066861"
Jan 27 19:00:11 crc kubenswrapper[4915]: I0127 19:00:11.850138 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.009880 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config\") pod \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") "
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.009999 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7sx\" (UniqueName: \"kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx\") pod \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") "
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.010056 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc\") pod \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") "
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.010131 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb\") pod \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\" (UID: \"1319812a-0ea7-4cf3-bbd7-55ecedbf696e\") "
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.013818 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx" (OuterVolumeSpecName: "kube-api-access-8t7sx") pod "1319812a-0ea7-4cf3-bbd7-55ecedbf696e" (UID: "1319812a-0ea7-4cf3-bbd7-55ecedbf696e"). InnerVolumeSpecName "kube-api-access-8t7sx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.029243 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1319812a-0ea7-4cf3-bbd7-55ecedbf696e" (UID: "1319812a-0ea7-4cf3-bbd7-55ecedbf696e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.030888 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1319812a-0ea7-4cf3-bbd7-55ecedbf696e" (UID: "1319812a-0ea7-4cf3-bbd7-55ecedbf696e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.037878 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config" (OuterVolumeSpecName: "config") pod "1319812a-0ea7-4cf3-bbd7-55ecedbf696e" (UID: "1319812a-0ea7-4cf3-bbd7-55ecedbf696e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.112350 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.112401 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.112420 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7sx\" (UniqueName: \"kubernetes.io/projected/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-kube-api-access-8t7sx\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.112432 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1319812a-0ea7-4cf3-bbd7-55ecedbf696e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.220452 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hf52h"]
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.244116 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.258198 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"]
Jan 27 19:00:12 crc kubenswrapper[4915]: E0127 19:00:12.258523 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4cf525f-49ce-424b-8dbc-1c3f807b78d7" containerName="collect-profiles"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.258539 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4cf525f-49ce-424b-8dbc-1c3f807b78d7" containerName="collect-profiles"
Jan 27 19:00:12 crc kubenswrapper[4915]: E0127 19:00:12.258562 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1319812a-0ea7-4cf3-bbd7-55ecedbf696e" containerName="init"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.258571 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1319812a-0ea7-4cf3-bbd7-55ecedbf696e" containerName="init"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.258744 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1319812a-0ea7-4cf3-bbd7-55ecedbf696e" containerName="init"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.258763 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4cf525f-49ce-424b-8dbc-1c3f807b78d7" containerName="collect-profiles"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.259549 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.296627 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"]
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.416505 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.416553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglhn\" (UniqueName: \"kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.416616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.416653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.416673 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.504688 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerStarted","Data":"47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176"}
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.505702 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerStarted","Data":"9bf35278466b1eb6865efbb8b4d74a29b2c9e804afe1f6f529238009b91cf6d9"}
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.505896 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.507674 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp" event={"ID":"1319812a-0ea7-4cf3-bbd7-55ecedbf696e","Type":"ContainerDied","Data":"739a412abf47ef2ab720ac84feea4a00c96a47ed1f89457d555f2ecc2fa396ed"}
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.507724 4915 scope.go:117] "RemoveContainer" containerID="876742ed0d6bf891f4af922915ecaef3afccae85f62b07c1dc6ee656008749d1"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.507731 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xtgkp"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.515576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hf52h" event={"ID":"d38f6264-7883-4f50-87bb-870718f430e8","Type":"ContainerStarted","Data":"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79"}
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.515625 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-hf52h"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.517762 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.517858 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.517893 4915 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.518024 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.518073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xglhn\" (UniqueName: \"kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.518579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.519327 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.519340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.519340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.540695 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xglhn\" (UniqueName: \"kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn\") pod \"dnsmasq-dns-b8fbc5445-bm9dh\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") " pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.555718 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-hf52h" podStartSLOduration=8.555697404 podStartE2EDuration="8.555697404s" podCreationTimestamp="2026-01-27 19:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:12.554364592 +0000 UTC m=+1103.912218276" watchObservedRunningTime="2026-01-27 19:00:12.555697404 +0000 UTC m=+1103.913551068" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.563056 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.814998673 podStartE2EDuration="8.563036384s" podCreationTimestamp="2026-01-27 19:00:04 +0000 UTC" firstStartedPulling="2026-01-27 19:00:05.564053459 +0000 UTC m=+1096.921907123" lastFinishedPulling="2026-01-27 19:00:11.31209117 +0000 UTC m=+1102.669944834" 
observedRunningTime="2026-01-27 19:00:12.534002342 +0000 UTC m=+1103.891856006" watchObservedRunningTime="2026-01-27 19:00:12.563036384 +0000 UTC m=+1103.920890048" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.581196 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.711105 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"] Jan 27 19:00:12 crc kubenswrapper[4915]: I0127 19:00:12.732725 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xtgkp"] Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.034663 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"] Jan 27 19:00:13 crc kubenswrapper[4915]: W0127 19:00:13.037218 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84957bd3_d9f0_4452_9311_c1dee4133184.slice/crio-7b2723cf717d8354cb1bf36c546533768f97acd5eefd2607fa5f387f361c9eea WatchSource:0}: Error finding container 7b2723cf717d8354cb1bf36c546533768f97acd5eefd2607fa5f387f361c9eea: Status 404 returned error can't find the container with id 7b2723cf717d8354cb1bf36c546533768f97acd5eefd2607fa5f387f361c9eea Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.368351 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1319812a-0ea7-4cf3-bbd7-55ecedbf696e" path="/var/lib/kubelet/pods/1319812a-0ea7-4cf3-bbd7-55ecedbf696e/volumes" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.410917 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.415814 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.418267 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.418684 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.418930 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.422970 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-srp5q" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.443991 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.535730 4915 generic.go:334] "Generic (PLEG): container finished" podID="84957bd3-d9f0-4452-9311-c1dee4133184" containerID="f53109c47e79e8245f76b3f3cb3f5dddf48f6c680d364bf20f999fb39b79f031" exitCode=0 Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.535843 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" event={"ID":"84957bd3-d9f0-4452-9311-c1dee4133184","Type":"ContainerDied","Data":"f53109c47e79e8245f76b3f3cb3f5dddf48f6c680d364bf20f999fb39b79f031"} Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.536097 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" event={"ID":"84957bd3-d9f0-4452-9311-c1dee4133184","Type":"ContainerStarted","Data":"7b2723cf717d8354cb1bf36c546533768f97acd5eefd2607fa5f387f361c9eea"} Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.536239 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-hf52h" podUID="d38f6264-7883-4f50-87bb-870718f430e8" 
containerName="dnsmasq-dns" containerID="cri-o://21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79" gracePeriod=10 Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.543918 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.543959 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjcq\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.544008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.544035 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.544085 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 
crc kubenswrapper[4915]: I0127 19:00:13.544112 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.602886 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.645892 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.645941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjcq\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.646045 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.646614 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 
19:00:13.646645 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.646563 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.646833 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.647382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.647445 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: E0127 19:00:13.648029 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 19:00:13 crc kubenswrapper[4915]: E0127 19:00:13.648080 4915 projected.go:194] Error 
preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 19:00:13 crc kubenswrapper[4915]: E0127 19:00:13.648156 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift podName:a50240d6-5cb2-4e11-a9da-5a7c682b5d93 nodeName:}" failed. No retries permitted until 2026-01-27 19:00:14.148110582 +0000 UTC m=+1105.505964326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift") pod "swift-storage-0" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93") : configmap "swift-ring-files" not found Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.652579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.666663 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjcq\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.668999 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:13 crc kubenswrapper[4915]: I0127 19:00:13.687624 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 19:00:13 crc 
kubenswrapper[4915]: I0127 19:00:13.975708 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hf52h" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.054279 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsgpf\" (UniqueName: \"kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf\") pod \"d38f6264-7883-4f50-87bb-870718f430e8\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.054374 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb\") pod \"d38f6264-7883-4f50-87bb-870718f430e8\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.054513 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb\") pod \"d38f6264-7883-4f50-87bb-870718f430e8\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.054578 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc\") pod \"d38f6264-7883-4f50-87bb-870718f430e8\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.054616 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config\") pod \"d38f6264-7883-4f50-87bb-870718f430e8\" (UID: \"d38f6264-7883-4f50-87bb-870718f430e8\") " Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.063981 4915 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf" (OuterVolumeSpecName: "kube-api-access-qsgpf") pod "d38f6264-7883-4f50-87bb-870718f430e8" (UID: "d38f6264-7883-4f50-87bb-870718f430e8"). InnerVolumeSpecName "kube-api-access-qsgpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.094047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d38f6264-7883-4f50-87bb-870718f430e8" (UID: "d38f6264-7883-4f50-87bb-870718f430e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.094257 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d38f6264-7883-4f50-87bb-870718f430e8" (UID: "d38f6264-7883-4f50-87bb-870718f430e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.098901 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config" (OuterVolumeSpecName: "config") pod "d38f6264-7883-4f50-87bb-870718f430e8" (UID: "d38f6264-7883-4f50-87bb-870718f430e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.107523 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d38f6264-7883-4f50-87bb-870718f430e8" (UID: "d38f6264-7883-4f50-87bb-870718f430e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.156941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.157039 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsgpf\" (UniqueName: \"kubernetes.io/projected/d38f6264-7883-4f50-87bb-870718f430e8-kube-api-access-qsgpf\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.157054 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.157067 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.157078 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.157088 4915 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d38f6264-7883-4f50-87bb-870718f430e8-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:14 crc kubenswrapper[4915]: E0127 19:00:14.157197 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 19:00:14 crc kubenswrapper[4915]: E0127 19:00:14.157212 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 19:00:14 crc kubenswrapper[4915]: E0127 19:00:14.157264 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift podName:a50240d6-5cb2-4e11-a9da-5a7c682b5d93 nodeName:}" failed. No retries permitted until 2026-01-27 19:00:15.157243022 +0000 UTC m=+1106.515096696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift") pod "swift-storage-0" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93") : configmap "swift-ring-files" not found Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.544829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" event={"ID":"84957bd3-d9f0-4452-9311-c1dee4133184","Type":"ContainerStarted","Data":"93566813ef8a71e9665950bb1a85a128a064e1dce7aaeafbf6ff3f225b55f567"} Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.545234 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.548057 4915 generic.go:334] "Generic (PLEG): container finished" podID="d38f6264-7883-4f50-87bb-870718f430e8" containerID="21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79" exitCode=0 Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 
19:00:14.548848 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hf52h" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.550526 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hf52h" event={"ID":"d38f6264-7883-4f50-87bb-870718f430e8","Type":"ContainerDied","Data":"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79"} Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.550563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hf52h" event={"ID":"d38f6264-7883-4f50-87bb-870718f430e8","Type":"ContainerDied","Data":"c5d528ba38b65758d61234728185cabc377d434bbc727c21c69481289c1df353"} Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.550579 4915 scope.go:117] "RemoveContainer" containerID="21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.566178 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" podStartSLOduration=2.566162463 podStartE2EDuration="2.566162463s" podCreationTimestamp="2026-01-27 19:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:14.562976955 +0000 UTC m=+1105.920830619" watchObservedRunningTime="2026-01-27 19:00:14.566162463 +0000 UTC m=+1105.924016127" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.576081 4915 scope.go:117] "RemoveContainer" containerID="71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.591275 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hf52h"] Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.596424 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-hf52h"] Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.603452 4915 scope.go:117] "RemoveContainer" containerID="21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79" Jan 27 19:00:14 crc kubenswrapper[4915]: E0127 19:00:14.603999 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79\": container with ID starting with 21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79 not found: ID does not exist" containerID="21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.604027 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79"} err="failed to get container status \"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79\": rpc error: code = NotFound desc = could not find container \"21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79\": container with ID starting with 21b4cc2e6ff7ca4a23d376a0d17c6fe849b215c68723de492679ccb65b9ffc79 not found: ID does not exist" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.604047 4915 scope.go:117] "RemoveContainer" containerID="71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32" Jan 27 19:00:14 crc kubenswrapper[4915]: E0127 19:00:14.605719 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32\": container with ID starting with 71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32 not found: ID does not exist" containerID="71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32" Jan 27 19:00:14 crc kubenswrapper[4915]: I0127 19:00:14.605741 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32"} err="failed to get container status \"71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32\": rpc error: code = NotFound desc = could not find container \"71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32\": container with ID starting with 71eb0203c6dbc185783ae61b9b3d02912e43ce31aacd2702509b99fb6451fe32 not found: ID does not exist" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.176114 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:15 crc kubenswrapper[4915]: E0127 19:00:15.176308 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 19:00:15 crc kubenswrapper[4915]: E0127 19:00:15.176333 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 19:00:15 crc kubenswrapper[4915]: E0127 19:00:15.176387 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift podName:a50240d6-5cb2-4e11-a9da-5a7c682b5d93 nodeName:}" failed. No retries permitted until 2026-01-27 19:00:17.176372791 +0000 UTC m=+1108.534226455 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift") pod "swift-storage-0" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93") : configmap "swift-ring-files" not found Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.366289 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38f6264-7883-4f50-87bb-870718f430e8" path="/var/lib/kubelet/pods/d38f6264-7883-4f50-87bb-870718f430e8/volumes" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.884070 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-aba9-account-create-update-jpw5r"] Jan 27 19:00:15 crc kubenswrapper[4915]: E0127 19:00:15.884748 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f6264-7883-4f50-87bb-870718f430e8" containerName="dnsmasq-dns" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.884763 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f6264-7883-4f50-87bb-870718f430e8" containerName="dnsmasq-dns" Jan 27 19:00:15 crc kubenswrapper[4915]: E0127 19:00:15.884776 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f6264-7883-4f50-87bb-870718f430e8" containerName="init" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.884781 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f6264-7883-4f50-87bb-870718f430e8" containerName="init" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.884976 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38f6264-7883-4f50-87bb-870718f430e8" containerName="dnsmasq-dns" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.885546 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.888244 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.894694 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kxzjh"] Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.895978 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.903273 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aba9-account-create-update-jpw5r"] Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.963515 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kxzjh"] Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.988872 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.988927 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8qv\" (UniqueName: \"kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.988988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vqq\" (UniqueName: 
\"kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:15 crc kubenswrapper[4915]: I0127 19:00:15.989077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.090872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.090933 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8qv\" (UniqueName: \"kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.090995 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87vqq\" (UniqueName: \"kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.091087 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.092062 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.092500 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.110358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8qv\" (UniqueName: \"kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv\") pod \"glance-db-create-kxzjh\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.111395 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vqq\" (UniqueName: \"kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq\") pod \"glance-aba9-account-create-update-jpw5r\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.243616 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.251315 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.458429 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.572895 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.793695 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kxzjh"] Jan 27 19:00:16 crc kubenswrapper[4915]: W0127 19:00:16.795631 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04e9061_ebcc_481f_8875_624ad3914bb0.slice/crio-4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37 WatchSource:0}: Error finding container 4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37: Status 404 returned error can't find the container with id 4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37 Jan 27 19:00:16 crc kubenswrapper[4915]: I0127 19:00:16.872637 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-aba9-account-create-update-jpw5r"] Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.214222 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:17 crc kubenswrapper[4915]: E0127 19:00:17.214429 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Jan 27 19:00:17 crc kubenswrapper[4915]: E0127 19:00:17.214642 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 19:00:17 crc kubenswrapper[4915]: E0127 19:00:17.214697 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift podName:a50240d6-5cb2-4e11-a9da-5a7c682b5d93 nodeName:}" failed. No retries permitted until 2026-01-27 19:00:21.214680753 +0000 UTC m=+1112.572534417 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift") pod "swift-storage-0" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93") : configmap "swift-ring-files" not found Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.331391 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gw7dx"] Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.333004 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.335330 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.353731 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gw7dx"] Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.358880 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.359270 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.417906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.417948 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.417985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: 
I0127 19:00:17.418113 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.418200 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w867f\" (UniqueName: \"kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.418257 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.418296 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520328 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520376 
4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520412 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520438 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w867f\" (UniqueName: \"kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.520939 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.521534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.523223 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.526335 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.535570 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf\") pod \"swift-ring-rebalance-gw7dx\" 
(UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.535774 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.538778 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w867f\" (UniqueName: \"kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f\") pod \"swift-ring-rebalance-gw7dx\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.575918 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" containerID="82d02e4e93b364d97c657c32d58bfee07e0aedaa25c9ce966dd80158a75b2166" exitCode=0 Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.576004 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aba9-account-create-update-jpw5r" event={"ID":"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72","Type":"ContainerDied","Data":"82d02e4e93b364d97c657c32d58bfee07e0aedaa25c9ce966dd80158a75b2166"} Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.576034 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aba9-account-create-update-jpw5r" event={"ID":"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72","Type":"ContainerStarted","Data":"e56f89fad3af110dcc95ab0b934b61fe7e0757f4489a27aa6edb26b52168dece"} Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.590095 4915 generic.go:334] "Generic (PLEG): container finished" podID="c04e9061-ebcc-481f-8875-624ad3914bb0" 
containerID="fbe7fa792658ac0d2938034c71e6e12768548bb7e57cdd4e8498d3744da73a91" exitCode=0 Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.590139 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kxzjh" event={"ID":"c04e9061-ebcc-481f-8875-624ad3914bb0","Type":"ContainerDied","Data":"fbe7fa792658ac0d2938034c71e6e12768548bb7e57cdd4e8498d3744da73a91"} Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.590168 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kxzjh" event={"ID":"c04e9061-ebcc-481f-8875-624ad3914bb0","Type":"ContainerStarted","Data":"4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37"} Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.596158 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rg5w9"] Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.597618 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.603721 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.634131 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rg5w9"] Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.691272 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.724005 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdgq\" (UniqueName: \"kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq\") pod \"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.724408 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts\") pod \"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.828742 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdgq\" (UniqueName: \"kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq\") pod \"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.828850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts\") pod \"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.829576 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts\") pod 
\"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.847609 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdgq\" (UniqueName: \"kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq\") pod \"root-account-create-update-rg5w9\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:17 crc kubenswrapper[4915]: I0127 19:00:17.935351 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.144963 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gw7dx"] Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.367592 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rg5w9"] Jan 27 19:00:18 crc kubenswrapper[4915]: W0127 19:00:18.369205 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9841c88_d3a2_4118_9a42_1599d009936d.slice/crio-1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b WatchSource:0}: Error finding container 1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b: Status 404 returned error can't find the container with id 1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.599597 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gw7dx" event={"ID":"af75964e-e79e-49da-ad8d-37b14e63b5e6","Type":"ContainerStarted","Data":"281cec9430f4c703e2d7657481255f3f02d7f7bbc088daf03112c425162f567c"} Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.601723 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg5w9" event={"ID":"f9841c88-d3a2-4118-9a42-1599d009936d","Type":"ContainerStarted","Data":"a33da52f918876be7e04c73bf550e4d751371de9f1f41a808dfdf89c2d34a2de"} Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.601840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg5w9" event={"ID":"f9841c88-d3a2-4118-9a42-1599d009936d","Type":"ContainerStarted","Data":"1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b"} Jan 27 19:00:18 crc kubenswrapper[4915]: I0127 19:00:18.621191 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rg5w9" podStartSLOduration=1.6211679540000001 podStartE2EDuration="1.621167954s" podCreationTimestamp="2026-01-27 19:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:18.616192592 +0000 UTC m=+1109.974046256" watchObservedRunningTime="2026-01-27 19:00:18.621167954 +0000 UTC m=+1109.979021618" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.003875 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.008804 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.051416 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts\") pod \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.051644 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87vqq\" (UniqueName: \"kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq\") pod \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\" (UID: \"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72\") " Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.051673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts\") pod \"c04e9061-ebcc-481f-8875-624ad3914bb0\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.051729 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl8qv\" (UniqueName: \"kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv\") pod \"c04e9061-ebcc-481f-8875-624ad3914bb0\" (UID: \"c04e9061-ebcc-481f-8875-624ad3914bb0\") " Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.052654 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" (UID: "bb347f27-66c6-4bc1-8c2b-d75bfcfeea72"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.053593 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c04e9061-ebcc-481f-8875-624ad3914bb0" (UID: "c04e9061-ebcc-481f-8875-624ad3914bb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.054045 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c04e9061-ebcc-481f-8875-624ad3914bb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.054088 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.057289 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq" (OuterVolumeSpecName: "kube-api-access-87vqq") pod "bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" (UID: "bb347f27-66c6-4bc1-8c2b-d75bfcfeea72"). InnerVolumeSpecName "kube-api-access-87vqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.059357 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv" (OuterVolumeSpecName: "kube-api-access-sl8qv") pod "c04e9061-ebcc-481f-8875-624ad3914bb0" (UID: "c04e9061-ebcc-481f-8875-624ad3914bb0"). InnerVolumeSpecName "kube-api-access-sl8qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.155783 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87vqq\" (UniqueName: \"kubernetes.io/projected/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72-kube-api-access-87vqq\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.155830 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl8qv\" (UniqueName: \"kubernetes.io/projected/c04e9061-ebcc-481f-8875-624ad3914bb0-kube-api-access-sl8qv\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.610365 4915 generic.go:334] "Generic (PLEG): container finished" podID="f9841c88-d3a2-4118-9a42-1599d009936d" containerID="a33da52f918876be7e04c73bf550e4d751371de9f1f41a808dfdf89c2d34a2de" exitCode=0 Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.610430 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg5w9" event={"ID":"f9841c88-d3a2-4118-9a42-1599d009936d","Type":"ContainerDied","Data":"a33da52f918876be7e04c73bf550e4d751371de9f1f41a808dfdf89c2d34a2de"} Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.612520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-aba9-account-create-update-jpw5r" event={"ID":"bb347f27-66c6-4bc1-8c2b-d75bfcfeea72","Type":"ContainerDied","Data":"e56f89fad3af110dcc95ab0b934b61fe7e0757f4489a27aa6edb26b52168dece"} Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.612552 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56f89fad3af110dcc95ab0b934b61fe7e0757f4489a27aa6edb26b52168dece" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.612598 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-aba9-account-create-update-jpw5r" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.619855 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kxzjh" event={"ID":"c04e9061-ebcc-481f-8875-624ad3914bb0","Type":"ContainerDied","Data":"4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37"} Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.619887 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db08ce106844b85d5010270ddf5be4374fff69088617d77c8f554c6a9386a37" Jan 27 19:00:19 crc kubenswrapper[4915]: I0127 19:00:19.619937 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kxzjh" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.225157 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-st6bz"] Jan 27 19:00:20 crc kubenswrapper[4915]: E0127 19:00:20.225680 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04e9061-ebcc-481f-8875-624ad3914bb0" containerName="mariadb-database-create" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.225704 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04e9061-ebcc-481f-8875-624ad3914bb0" containerName="mariadb-database-create" Jan 27 19:00:20 crc kubenswrapper[4915]: E0127 19:00:20.225741 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" containerName="mariadb-account-create-update" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.225752 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" containerName="mariadb-account-create-update" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.226038 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04e9061-ebcc-481f-8875-624ad3914bb0" containerName="mariadb-database-create" Jan 
27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.226063 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" containerName="mariadb-account-create-update" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.226886 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.234623 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-st6bz"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.279973 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.280174 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbh7d\" (UniqueName: \"kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.328433 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0e30-account-create-update-pf994"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.329947 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.332236 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.338915 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e30-account-create-update-pf994"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.381642 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmzn\" (UniqueName: \"kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn\") pod \"keystone-0e30-account-create-update-pf994\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.381690 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbh7d\" (UniqueName: \"kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.382077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.382163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts\") pod \"keystone-0e30-account-create-update-pf994\" (UID: 
\"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.383101 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.409873 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbh7d\" (UniqueName: \"kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d\") pod \"keystone-db-create-st6bz\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.484480 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts\") pod \"keystone-0e30-account-create-update-pf994\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.484800 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmzn\" (UniqueName: \"kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn\") pod \"keystone-0e30-account-create-update-pf994\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.487133 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts\") pod 
\"keystone-0e30-account-create-update-pf994\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.522807 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x46j5"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.523859 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.532023 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x46j5"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.536450 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmzn\" (UniqueName: \"kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn\") pod \"keystone-0e30-account-create-update-pf994\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.547620 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.586577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.586703 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.629565 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2fe5-account-create-update-x9fkk"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.633162 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.638137 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.653930 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2fe5-account-create-update-x9fkk"] Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.683587 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.687637 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.687702 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.687748 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.687818 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm9b\" (UniqueName: \"kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.688469 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.704286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c\") pod \"placement-db-create-x46j5\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.790369 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.791212 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.791635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thm9b\" (UniqueName: \"kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.808286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-thm9b\" (UniqueName: \"kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b\") pod \"placement-2fe5-account-create-update-x9fkk\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.876226 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x46j5" Jan 27 19:00:20 crc kubenswrapper[4915]: I0127 19:00:20.959654 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.026924 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nk9l7"] Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.028287 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.032266 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.034409 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc55j" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.038621 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nk9l7"] Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.098454 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.098593 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqcl\" (UniqueName: \"kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.098719 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.098831 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.201434 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.201533 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqcl\" (UniqueName: \"kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.201652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.201704 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.205774 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.206261 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.207457 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data\") pod \"glance-db-sync-nk9l7\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.223663 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqcl\" (UniqueName: \"kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl\") pod \"glance-db-sync-nk9l7\" 
(UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") " pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.303394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:21 crc kubenswrapper[4915]: E0127 19:00:21.303573 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 19:00:21 crc kubenswrapper[4915]: E0127 19:00:21.303597 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 19:00:21 crc kubenswrapper[4915]: E0127 19:00:21.303656 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift podName:a50240d6-5cb2-4e11-a9da-5a7c682b5d93 nodeName:}" failed. No retries permitted until 2026-01-27 19:00:29.303640017 +0000 UTC m=+1120.661493681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift") pod "swift-storage-0" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93") : configmap "swift-ring-files" not found Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.353812 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nk9l7" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.628407 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.655768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rg5w9" event={"ID":"f9841c88-d3a2-4118-9a42-1599d009936d","Type":"ContainerDied","Data":"1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b"} Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.656116 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d90c14722270f71a6c1e297c3a3877d041b19960bb1be4334eeae224d5f174b" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.656065 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rg5w9" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.711107 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdgq\" (UniqueName: \"kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq\") pod \"f9841c88-d3a2-4118-9a42-1599d009936d\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.711231 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts\") pod \"f9841c88-d3a2-4118-9a42-1599d009936d\" (UID: \"f9841c88-d3a2-4118-9a42-1599d009936d\") " Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.712240 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9841c88-d3a2-4118-9a42-1599d009936d" (UID: "f9841c88-d3a2-4118-9a42-1599d009936d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.714775 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq" (OuterVolumeSpecName: "kube-api-access-5wdgq") pod "f9841c88-d3a2-4118-9a42-1599d009936d" (UID: "f9841c88-d3a2-4118-9a42-1599d009936d"). InnerVolumeSpecName "kube-api-access-5wdgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.813140 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdgq\" (UniqueName: \"kubernetes.io/projected/f9841c88-d3a2-4118-9a42-1599d009936d-kube-api-access-5wdgq\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:21 crc kubenswrapper[4915]: I0127 19:00:21.813508 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9841c88-d3a2-4118-9a42-1599d009936d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.143227 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e30-account-create-update-pf994"] Jan 27 19:00:22 crc kubenswrapper[4915]: W0127 19:00:22.147770 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cbe256_8e24_4222_9638_78b805a12278.slice/crio-07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2 WatchSource:0}: Error finding container 07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2: Status 404 returned error can't find the container with id 07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2 Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.152228 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x46j5"] Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 
19:00:22.285329 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-st6bz"] Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.292561 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2fe5-account-create-update-x9fkk"] Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.379001 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nk9l7"] Jan 27 19:00:22 crc kubenswrapper[4915]: W0127 19:00:22.393405 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75d11dfb_4c76_4968_81c0_d64a2272772b.slice/crio-222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10 WatchSource:0}: Error finding container 222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10: Status 404 returned error can't find the container with id 222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10 Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.583037 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.650615 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.650879 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="dnsmasq-dns" containerID="cri-o://5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a" gracePeriod=10 Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.668102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-st6bz" 
event={"ID":"50b0c87d-6ff3-47cc-8991-884a52945586","Type":"ContainerStarted","Data":"aaf6233019e696a30d5a3e828efaf0591ebe9af4cbbb29f70d99a50b3c0b4520"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.668153 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-st6bz" event={"ID":"50b0c87d-6ff3-47cc-8991-884a52945586","Type":"ContainerStarted","Data":"fbd5cc2854f90bc6e0cf9baa10cb444bb1d196e4382c6ceb5c4579d8cee76963"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.674659 4915 generic.go:334] "Generic (PLEG): container finished" podID="95cbe256-8e24-4222-9638-78b805a12278" containerID="9b6ca4421df3d3980e0456e18383ce4d6cf3d1302169aab1892b3f60264786ad" exitCode=0 Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.674764 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e30-account-create-update-pf994" event={"ID":"95cbe256-8e24-4222-9638-78b805a12278","Type":"ContainerDied","Data":"9b6ca4421df3d3980e0456e18383ce4d6cf3d1302169aab1892b3f60264786ad"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.674821 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e30-account-create-update-pf994" event={"ID":"95cbe256-8e24-4222-9638-78b805a12278","Type":"ContainerStarted","Data":"07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.681664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nk9l7" event={"ID":"75d11dfb-4c76-4968-81c0-d64a2272772b","Type":"ContainerStarted","Data":"222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.692767 4915 generic.go:334] "Generic (PLEG): container finished" podID="f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" containerID="dc92f7d6abd39e3ef81eaf6dc25224f561ed1b346a562d573a0cb55379bcf521" exitCode=0 Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.692920 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x46j5" event={"ID":"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7","Type":"ContainerDied","Data":"dc92f7d6abd39e3ef81eaf6dc25224f561ed1b346a562d573a0cb55379bcf521"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.693058 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x46j5" event={"ID":"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7","Type":"ContainerStarted","Data":"bb23b466c46f5540d0954a6bffe73a3732cf58ee2208b6f07680a1c8f25c3d51"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.695140 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-st6bz" podStartSLOduration=2.69511369 podStartE2EDuration="2.69511369s" podCreationTimestamp="2026-01-27 19:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:22.686918879 +0000 UTC m=+1114.044772543" watchObservedRunningTime="2026-01-27 19:00:22.69511369 +0000 UTC m=+1114.052967354" Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.695933 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-x9fkk" event={"ID":"2852f67f-81d4-4f7c-b5b1-c66939d69aed","Type":"ContainerStarted","Data":"94362fedde009c8c4e6d7acae6bce7dcc515be645c53ecd648341b39fd19e4cc"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.695984 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-x9fkk" event={"ID":"2852f67f-81d4-4f7c-b5b1-c66939d69aed","Type":"ContainerStarted","Data":"3999d39406f768b248b484a2b629ac2bf1c2bc197b8f0f97b13ebf0f0352d1ec"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.698328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gw7dx" 
event={"ID":"af75964e-e79e-49da-ad8d-37b14e63b5e6","Type":"ContainerStarted","Data":"f58363042e991a9612d8ad38408e5dfa591ca7acbcfb33b084f57cad80503a36"} Jan 27 19:00:22 crc kubenswrapper[4915]: I0127 19:00:22.751637 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gw7dx" podStartSLOduration=2.252318707 podStartE2EDuration="5.751618416s" podCreationTimestamp="2026-01-27 19:00:17 +0000 UTC" firstStartedPulling="2026-01-27 19:00:18.160460134 +0000 UTC m=+1109.518313798" lastFinishedPulling="2026-01-27 19:00:21.659759833 +0000 UTC m=+1113.017613507" observedRunningTime="2026-01-27 19:00:22.730313333 +0000 UTC m=+1114.088166997" watchObservedRunningTime="2026-01-27 19:00:22.751618416 +0000 UTC m=+1114.109472080" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.135042 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.256613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc\") pod \"03628b30-c873-40cb-a7f9-acea32d1f486\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.256782 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config\") pod \"03628b30-c873-40cb-a7f9-acea32d1f486\" (UID: \"03628b30-c873-40cb-a7f9-acea32d1f486\") " Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.256878 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drf7\" (UniqueName: \"kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7\") pod \"03628b30-c873-40cb-a7f9-acea32d1f486\" (UID: 
\"03628b30-c873-40cb-a7f9-acea32d1f486\") " Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.261719 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7" (OuterVolumeSpecName: "kube-api-access-8drf7") pod "03628b30-c873-40cb-a7f9-acea32d1f486" (UID: "03628b30-c873-40cb-a7f9-acea32d1f486"). InnerVolumeSpecName "kube-api-access-8drf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.313982 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config" (OuterVolumeSpecName: "config") pod "03628b30-c873-40cb-a7f9-acea32d1f486" (UID: "03628b30-c873-40cb-a7f9-acea32d1f486"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.315564 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03628b30-c873-40cb-a7f9-acea32d1f486" (UID: "03628b30-c873-40cb-a7f9-acea32d1f486"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.358853 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.358892 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drf7\" (UniqueName: \"kubernetes.io/projected/03628b30-c873-40cb-a7f9-acea32d1f486-kube-api-access-8drf7\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.358905 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03628b30-c873-40cb-a7f9-acea32d1f486-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.706063 4915 generic.go:334] "Generic (PLEG): container finished" podID="2852f67f-81d4-4f7c-b5b1-c66939d69aed" containerID="94362fedde009c8c4e6d7acae6bce7dcc515be645c53ecd648341b39fd19e4cc" exitCode=0 Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.706171 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-x9fkk" event={"ID":"2852f67f-81d4-4f7c-b5b1-c66939d69aed","Type":"ContainerDied","Data":"94362fedde009c8c4e6d7acae6bce7dcc515be645c53ecd648341b39fd19e4cc"} Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.709199 4915 generic.go:334] "Generic (PLEG): container finished" podID="50b0c87d-6ff3-47cc-8991-884a52945586" containerID="aaf6233019e696a30d5a3e828efaf0591ebe9af4cbbb29f70d99a50b3c0b4520" exitCode=0 Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.709279 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-st6bz" event={"ID":"50b0c87d-6ff3-47cc-8991-884a52945586","Type":"ContainerDied","Data":"aaf6233019e696a30d5a3e828efaf0591ebe9af4cbbb29f70d99a50b3c0b4520"} Jan 27 
19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.711845 4915 generic.go:334] "Generic (PLEG): container finished" podID="03628b30-c873-40cb-a7f9-acea32d1f486" containerID="5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a" exitCode=0 Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.712686 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.713220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" event={"ID":"03628b30-c873-40cb-a7f9-acea32d1f486","Type":"ContainerDied","Data":"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a"} Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.713254 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zzsgm" event={"ID":"03628b30-c873-40cb-a7f9-acea32d1f486","Type":"ContainerDied","Data":"7b7537387e550303788dece9a253ae5c826c3daf85c8efa19baeceb25a16f945"} Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.713274 4915 scope.go:117] "RemoveContainer" containerID="5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.758208 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.769983 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zzsgm"] Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.776770 4915 scope.go:117] "RemoveContainer" containerID="27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.796221 4915 scope.go:117] "RemoveContainer" containerID="5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a" Jan 27 19:00:23 crc kubenswrapper[4915]: E0127 19:00:23.796578 4915 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a\": container with ID starting with 5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a not found: ID does not exist" containerID="5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.796611 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a"} err="failed to get container status \"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a\": rpc error: code = NotFound desc = could not find container \"5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a\": container with ID starting with 5fddb3730fdff25f725dfc175dfe6c5b3f09fe1cf4a286caef09244356b6fb5a not found: ID does not exist" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.796632 4915 scope.go:117] "RemoveContainer" containerID="27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5" Jan 27 19:00:23 crc kubenswrapper[4915]: E0127 19:00:23.796952 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5\": container with ID starting with 27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5 not found: ID does not exist" containerID="27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.796974 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5"} err="failed to get container status \"27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5\": rpc error: code = NotFound 
desc = could not find container \"27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5\": container with ID starting with 27a3f767350147dbc2b3afa517fa8b47afa7df60b4ac2b72b7ed250e7f4580b5 not found: ID does not exist" Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.923000 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rg5w9"] Jan 27 19:00:23 crc kubenswrapper[4915]: I0127 19:00:23.928757 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rg5w9"] Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.186969 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x46j5" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.251425 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.271505 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.280698 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c\") pod \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.281457 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts\") pod \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\" (UID: \"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.284682 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" (UID: "f1bfa088-3a3e-4393-ae45-1d2d26a39dc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.284876 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.292311 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c" (OuterVolumeSpecName: "kube-api-access-mq57c") pod "f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" (UID: "f1bfa088-3a3e-4393-ae45-1d2d26a39dc7"). InnerVolumeSpecName "kube-api-access-mq57c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.385838 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjmzn\" (UniqueName: \"kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn\") pod \"95cbe256-8e24-4222-9638-78b805a12278\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.385893 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts\") pod \"95cbe256-8e24-4222-9638-78b805a12278\" (UID: \"95cbe256-8e24-4222-9638-78b805a12278\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.385924 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thm9b\" (UniqueName: \"kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b\") pod \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.386007 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts\") pod \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\" (UID: \"2852f67f-81d4-4f7c-b5b1-c66939d69aed\") " Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.386472 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7-kube-api-access-mq57c\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.386515 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95cbe256-8e24-4222-9638-78b805a12278" (UID: "95cbe256-8e24-4222-9638-78b805a12278"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.386848 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2852f67f-81d4-4f7c-b5b1-c66939d69aed" (UID: "2852f67f-81d4-4f7c-b5b1-c66939d69aed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.389797 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b" (OuterVolumeSpecName: "kube-api-access-thm9b") pod "2852f67f-81d4-4f7c-b5b1-c66939d69aed" (UID: "2852f67f-81d4-4f7c-b5b1-c66939d69aed"). InnerVolumeSpecName "kube-api-access-thm9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.389933 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn" (OuterVolumeSpecName: "kube-api-access-mjmzn") pod "95cbe256-8e24-4222-9638-78b805a12278" (UID: "95cbe256-8e24-4222-9638-78b805a12278"). InnerVolumeSpecName "kube-api-access-mjmzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.487481 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjmzn\" (UniqueName: \"kubernetes.io/projected/95cbe256-8e24-4222-9638-78b805a12278-kube-api-access-mjmzn\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.487513 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95cbe256-8e24-4222-9638-78b805a12278-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.487523 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thm9b\" (UniqueName: \"kubernetes.io/projected/2852f67f-81d4-4f7c-b5b1-c66939d69aed-kube-api-access-thm9b\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.487531 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2852f67f-81d4-4f7c-b5b1-c66939d69aed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.721477 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e30-account-create-update-pf994" event={"ID":"95cbe256-8e24-4222-9638-78b805a12278","Type":"ContainerDied","Data":"07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2"} Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.721526 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a94a489fa9d196f5dcb1042a186d1affb664c0bffb2ee6bd427b89b0a585b2" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.722356 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e30-account-create-update-pf994" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.731156 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x46j5" event={"ID":"f1bfa088-3a3e-4393-ae45-1d2d26a39dc7","Type":"ContainerDied","Data":"bb23b466c46f5540d0954a6bffe73a3732cf58ee2208b6f07680a1c8f25c3d51"} Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.731190 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb23b466c46f5540d0954a6bffe73a3732cf58ee2208b6f07680a1c8f25c3d51" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.731235 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x46j5" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.733350 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-x9fkk" event={"ID":"2852f67f-81d4-4f7c-b5b1-c66939d69aed","Type":"ContainerDied","Data":"3999d39406f768b248b484a2b629ac2bf1c2bc197b8f0f97b13ebf0f0352d1ec"} Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.733385 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3999d39406f768b248b484a2b629ac2bf1c2bc197b8f0f97b13ebf0f0352d1ec" Jan 27 19:00:24 crc kubenswrapper[4915]: I0127 19:00:24.733445 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-x9fkk" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.035853 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.115234 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.198989 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts\") pod \"50b0c87d-6ff3-47cc-8991-884a52945586\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.199061 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbh7d\" (UniqueName: \"kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d\") pod \"50b0c87d-6ff3-47cc-8991-884a52945586\" (UID: \"50b0c87d-6ff3-47cc-8991-884a52945586\") " Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.199517 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50b0c87d-6ff3-47cc-8991-884a52945586" (UID: "50b0c87d-6ff3-47cc-8991-884a52945586"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.204983 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d" (OuterVolumeSpecName: "kube-api-access-qbh7d") pod "50b0c87d-6ff3-47cc-8991-884a52945586" (UID: "50b0c87d-6ff3-47cc-8991-884a52945586"). InnerVolumeSpecName "kube-api-access-qbh7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.303736 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbh7d\" (UniqueName: \"kubernetes.io/projected/50b0c87d-6ff3-47cc-8991-884a52945586-kube-api-access-qbh7d\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.303804 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0c87d-6ff3-47cc-8991-884a52945586-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.366875 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" path="/var/lib/kubelet/pods/03628b30-c873-40cb-a7f9-acea32d1f486/volumes" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.367424 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9841c88-d3a2-4118-9a42-1599d009936d" path="/var/lib/kubelet/pods/f9841c88-d3a2-4118-9a42-1599d009936d/volumes" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.742652 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-st6bz" event={"ID":"50b0c87d-6ff3-47cc-8991-884a52945586","Type":"ContainerDied","Data":"fbd5cc2854f90bc6e0cf9baa10cb444bb1d196e4382c6ceb5c4579d8cee76963"} Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.742688 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd5cc2854f90bc6e0cf9baa10cb444bb1d196e4382c6ceb5c4579d8cee76963" Jan 27 19:00:25 crc kubenswrapper[4915]: I0127 19:00:25.742736 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-st6bz" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.672314 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7jpr4"] Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673089 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="init" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673107 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="init" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673125 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9841c88-d3a2-4118-9a42-1599d009936d" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673133 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9841c88-d3a2-4118-9a42-1599d009936d" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673151 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b0c87d-6ff3-47cc-8991-884a52945586" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673160 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b0c87d-6ff3-47cc-8991-884a52945586" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673177 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2852f67f-81d4-4f7c-b5b1-c66939d69aed" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673185 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2852f67f-81d4-4f7c-b5b1-c66939d69aed" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673198 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673205 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673271 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cbe256-8e24-4222-9638-78b805a12278" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673285 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cbe256-8e24-4222-9638-78b805a12278" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: E0127 19:00:27.673302 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="dnsmasq-dns" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673310 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="dnsmasq-dns" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673485 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2852f67f-81d4-4f7c-b5b1-c66939d69aed" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673584 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b0c87d-6ff3-47cc-8991-884a52945586" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673606 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" containerName="mariadb-database-create" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673624 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9841c88-d3a2-4118-9a42-1599d009936d" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: 
I0127 19:00:27.673636 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="03628b30-c873-40cb-a7f9-acea32d1f486" containerName="dnsmasq-dns" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.673649 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cbe256-8e24-4222-9638-78b805a12278" containerName="mariadb-account-create-update" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.675237 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.677084 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.682168 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7jpr4"] Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.748631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.748841 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dxnz\" (UniqueName: \"kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.850826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dxnz\" (UniqueName: 
\"kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.850928 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.851731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.868417 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dxnz\" (UniqueName: \"kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz\") pod \"root-account-create-update-7jpr4\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:27 crc kubenswrapper[4915]: I0127 19:00:27.998608 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:28 crc kubenswrapper[4915]: I0127 19:00:28.428473 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7jpr4"] Jan 27 19:00:28 crc kubenswrapper[4915]: I0127 19:00:28.771328 4915 generic.go:334] "Generic (PLEG): container finished" podID="af75964e-e79e-49da-ad8d-37b14e63b5e6" containerID="f58363042e991a9612d8ad38408e5dfa591ca7acbcfb33b084f57cad80503a36" exitCode=0 Jan 27 19:00:28 crc kubenswrapper[4915]: I0127 19:00:28.771370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gw7dx" event={"ID":"af75964e-e79e-49da-ad8d-37b14e63b5e6","Type":"ContainerDied","Data":"f58363042e991a9612d8ad38408e5dfa591ca7acbcfb33b084f57cad80503a36"} Jan 27 19:00:29 crc kubenswrapper[4915]: I0127 19:00:29.382144 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:29 crc kubenswrapper[4915]: I0127 19:00:29.391484 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"swift-storage-0\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") " pod="openstack/swift-storage-0" Jan 27 19:00:29 crc kubenswrapper[4915]: I0127 19:00:29.630732 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.586585 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lrzgd" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" probeResult="failure" output=< Jan 27 19:00:31 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 19:00:31 crc kubenswrapper[4915]: > Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.673063 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.681331 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g8pg6" Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.794090 4915 generic.go:334] "Generic (PLEG): container finished" podID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerID="30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31" exitCode=0 Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.794175 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerDied","Data":"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"} Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.796465 4915 generic.go:334] "Generic (PLEG): container finished" podID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerID="1358fb4c705e4868c1b83ea13e0f2c10cac4558883cc330d554154fd44be9f97" exitCode=0 Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.796548 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerDied","Data":"1358fb4c705e4868c1b83ea13e0f2c10cac4558883cc330d554154fd44be9f97"} Jan 27 19:00:31 crc 
kubenswrapper[4915]: I0127 19:00:31.895142 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lrzgd-config-5j6tx"] Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.897926 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.900421 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 19:00:31 crc kubenswrapper[4915]: I0127 19:00:31.909524 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lrzgd-config-5j6tx"] Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027281 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027324 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027349 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027428 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5j8k\" (UniqueName: \"kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.027676 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130565 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc 
kubenswrapper[4915]: I0127 19:00:32.130585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130668 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5j8k\" (UniqueName: \"kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130734 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130775 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130879 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: 
I0127 19:00:32.130880 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.130880 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.131766 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.133639 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.149256 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5j8k\" (UniqueName: \"kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k\") pod \"ovn-controller-lrzgd-config-5j6tx\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:32 crc kubenswrapper[4915]: I0127 19:00:32.227701 4915 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:34 crc kubenswrapper[4915]: W0127 19:00:34.271499 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf82f44e4_9f01_446a_9658_fa30f67bcadf.slice/crio-a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c WatchSource:0}: Error finding container a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c: Status 404 returned error can't find the container with id a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.513942 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.569450 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w867f\" (UniqueName: \"kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.569775 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.569816 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.569943 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.569992 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.570033 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.570073 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle\") pod \"af75964e-e79e-49da-ad8d-37b14e63b5e6\" (UID: \"af75964e-e79e-49da-ad8d-37b14e63b5e6\") " Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.571189 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.571605 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.578179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f" (OuterVolumeSpecName: "kube-api-access-w867f") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "kube-api-access-w867f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.582503 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.628869 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts" (OuterVolumeSpecName: "scripts") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.651917 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.651958 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "af75964e-e79e-49da-ad8d-37b14e63b5e6" (UID: "af75964e-e79e-49da-ad8d-37b14e63b5e6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672038 4915 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672064 4915 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672074 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672088 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w867f\" (UniqueName: \"kubernetes.io/projected/af75964e-e79e-49da-ad8d-37b14e63b5e6-kube-api-access-w867f\") on node 
\"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672099 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af75964e-e79e-49da-ad8d-37b14e63b5e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672107 4915 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/af75964e-e79e-49da-ad8d-37b14e63b5e6-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.672166 4915 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/af75964e-e79e-49da-ad8d-37b14e63b5e6-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.809168 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lrzgd-config-5j6tx"] Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.820579 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jpr4" event={"ID":"f82f44e4-9f01-446a-9658-fa30f67bcadf","Type":"ContainerStarted","Data":"a498e18cb645d2af96f71b6a4845679bc9d84cc14976fc27f3a86fb1021149d5"} Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.820631 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jpr4" event={"ID":"f82f44e4-9f01-446a-9658-fa30f67bcadf","Type":"ContainerStarted","Data":"a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c"} Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.826922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerStarted","Data":"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"} Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.827782 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.829798 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerStarted","Data":"2b4f416be9fb86b0cb75f45fd91a7a0c66676aedc9385d72b5cec37350e25d70"} Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.830383 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.832930 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gw7dx" event={"ID":"af75964e-e79e-49da-ad8d-37b14e63b5e6","Type":"ContainerDied","Data":"281cec9430f4c703e2d7657481255f3f02d7f7bbc088daf03112c425162f567c"} Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.832997 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281cec9430f4c703e2d7657481255f3f02d7f7bbc088daf03112c425162f567c" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.833409 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gw7dx" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.843508 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-7jpr4" podStartSLOduration=7.843492738 podStartE2EDuration="7.843492738s" podCreationTimestamp="2026-01-27 19:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:34.839645304 +0000 UTC m=+1126.197498978" watchObservedRunningTime="2026-01-27 19:00:34.843492738 +0000 UTC m=+1126.201346402" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.873905 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.177415083 podStartE2EDuration="58.873884164s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.345638985 +0000 UTC m=+1081.703492649" lastFinishedPulling="2026-01-27 18:59:57.042108066 +0000 UTC m=+1088.399961730" observedRunningTime="2026-01-27 19:00:34.868574244 +0000 UTC m=+1126.226427908" watchObservedRunningTime="2026-01-27 19:00:34.873884164 +0000 UTC m=+1126.231737828" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.905679 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.997615411 podStartE2EDuration="58.905662104s" podCreationTimestamp="2026-01-27 18:59:36 +0000 UTC" firstStartedPulling="2026-01-27 18:59:50.332895904 +0000 UTC m=+1081.690749568" lastFinishedPulling="2026-01-27 18:59:56.240942597 +0000 UTC m=+1087.598796261" observedRunningTime="2026-01-27 19:00:34.892949302 +0000 UTC m=+1126.250802966" watchObservedRunningTime="2026-01-27 19:00:34.905662104 +0000 UTC m=+1126.263515768" Jan 27 19:00:34 crc kubenswrapper[4915]: I0127 19:00:34.934677 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-storage-0"] Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.842881 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nk9l7" event={"ID":"75d11dfb-4c76-4968-81c0-d64a2272772b","Type":"ContainerStarted","Data":"2307aac0f134ed53b3b103835823b9021aefeb9cd3a5ef2f8acd9077a9f61d25"} Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.844482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"3f3ea44280e6cf33cd3b40f5391706066ff6128d8fe5802e49d970c147950d80"} Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.846823 4915 generic.go:334] "Generic (PLEG): container finished" podID="f82f44e4-9f01-446a-9658-fa30f67bcadf" containerID="a498e18cb645d2af96f71b6a4845679bc9d84cc14976fc27f3a86fb1021149d5" exitCode=0 Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.846883 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jpr4" event={"ID":"f82f44e4-9f01-446a-9658-fa30f67bcadf","Type":"ContainerDied","Data":"a498e18cb645d2af96f71b6a4845679bc9d84cc14976fc27f3a86fb1021149d5"} Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.848852 4915 generic.go:334] "Generic (PLEG): container finished" podID="83ca1070-e322-4204-9b67-6bccf505b073" containerID="ddeea46d0068d0692a1877e62fa4cfe602af838cb6034de47ada5fb45c968cda" exitCode=0 Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.848909 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd-config-5j6tx" event={"ID":"83ca1070-e322-4204-9b67-6bccf505b073","Type":"ContainerDied","Data":"ddeea46d0068d0692a1877e62fa4cfe602af838cb6034de47ada5fb45c968cda"} Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.848934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd-config-5j6tx" 
event={"ID":"83ca1070-e322-4204-9b67-6bccf505b073","Type":"ContainerStarted","Data":"4f2f9b448a214f7768e08a7353af87553a58a8f3ee9612aac6818a962a3c4c9f"} Jan 27 19:00:35 crc kubenswrapper[4915]: I0127 19:00:35.867690 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nk9l7" podStartSLOduration=3.875968027 podStartE2EDuration="15.867676223s" podCreationTimestamp="2026-01-27 19:00:20 +0000 UTC" firstStartedPulling="2026-01-27 19:00:22.39592088 +0000 UTC m=+1113.753774544" lastFinishedPulling="2026-01-27 19:00:34.387629076 +0000 UTC m=+1125.745482740" observedRunningTime="2026-01-27 19:00:35.863016338 +0000 UTC m=+1127.220870022" watchObservedRunningTime="2026-01-27 19:00:35.867676223 +0000 UTC m=+1127.225529887" Jan 27 19:00:36 crc kubenswrapper[4915]: I0127 19:00:36.608437 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-lrzgd" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.336688 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7jpr4" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.341772 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lrzgd-config-5j6tx" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516135 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516184 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516223 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5j8k\" (UniqueName: \"kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516240 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516297 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516361 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts\") pod \"f82f44e4-9f01-446a-9658-fa30f67bcadf\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516400 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dxnz\" (UniqueName: \"kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz\") pod \"f82f44e4-9f01-446a-9658-fa30f67bcadf\" (UID: \"f82f44e4-9f01-446a-9658-fa30f67bcadf\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516430 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn\") pod \"83ca1070-e322-4204-9b67-6bccf505b073\" (UID: \"83ca1070-e322-4204-9b67-6bccf505b073\") " Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.516775 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.517000 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.517201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.517271 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts" (OuterVolumeSpecName: "scripts") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.517705 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f82f44e4-9f01-446a-9658-fa30f67bcadf" (UID: "f82f44e4-9f01-446a-9658-fa30f67bcadf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.517820 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run" (OuterVolumeSpecName: "var-run") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.524463 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz" (OuterVolumeSpecName: "kube-api-access-5dxnz") pod "f82f44e4-9f01-446a-9658-fa30f67bcadf" (UID: "f82f44e4-9f01-446a-9658-fa30f67bcadf"). InnerVolumeSpecName "kube-api-access-5dxnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.524551 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k" (OuterVolumeSpecName: "kube-api-access-m5j8k") pod "83ca1070-e322-4204-9b67-6bccf505b073" (UID: "83ca1070-e322-4204-9b67-6bccf505b073"). InnerVolumeSpecName "kube-api-access-m5j8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618814 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82f44e4-9f01-446a-9658-fa30f67bcadf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618933 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dxnz\" (UniqueName: \"kubernetes.io/projected/f82f44e4-9f01-446a-9658-fa30f67bcadf-kube-api-access-5dxnz\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618951 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618963 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618975 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.618986 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5j8k\" (UniqueName: \"kubernetes.io/projected/83ca1070-e322-4204-9b67-6bccf505b073-kube-api-access-m5j8k\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.619000 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/83ca1070-e322-4204-9b67-6bccf505b073-var-run\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.619011 4915 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/83ca1070-e322-4204-9b67-6bccf505b073-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.873664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jpr4" event={"ID":"f82f44e4-9f01-446a-9658-fa30f67bcadf","Type":"ContainerDied","Data":"a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c"}
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.873723 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94ff81d863c80320f740c200a417f49b2e6f36fa1024bab944888e8d5d90d7c"
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.873829 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7jpr4"
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.883220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd-config-5j6tx" event={"ID":"83ca1070-e322-4204-9b67-6bccf505b073","Type":"ContainerDied","Data":"4f2f9b448a214f7768e08a7353af87553a58a8f3ee9612aac6818a962a3c4c9f"}
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.883259 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2f9b448a214f7768e08a7353af87553a58a8f3ee9612aac6818a962a3c4c9f"
Jan 27 19:00:37 crc kubenswrapper[4915]: I0127 19:00:37.883653 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd-config-5j6tx"
Jan 27 19:00:38 crc kubenswrapper[4915]: I0127 19:00:38.473089 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lrzgd-config-5j6tx"]
Jan 27 19:00:38 crc kubenswrapper[4915]: I0127 19:00:38.482225 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lrzgd-config-5j6tx"]
Jan 27 19:00:38 crc kubenswrapper[4915]: I0127 19:00:38.989969 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7jpr4"]
Jan 27 19:00:38 crc kubenswrapper[4915]: I0127 19:00:38.997870 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7jpr4"]
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.373137 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ca1070-e322-4204-9b67-6bccf505b073" path="/var/lib/kubelet/pods/83ca1070-e322-4204-9b67-6bccf505b073/volumes"
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.373988 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82f44e4-9f01-446a-9658-fa30f67bcadf" path="/var/lib/kubelet/pods/f82f44e4-9f01-446a-9658-fa30f67bcadf/volumes"
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.898057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675"}
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.898102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5"}
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.898116 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca"}
Jan 27 19:00:39 crc kubenswrapper[4915]: I0127 19:00:39.898126 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49"}
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.723009 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gn8ks"]
Jan 27 19:00:42 crc kubenswrapper[4915]: E0127 19:00:42.723973 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82f44e4-9f01-446a-9658-fa30f67bcadf" containerName="mariadb-account-create-update"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.723990 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82f44e4-9f01-446a-9658-fa30f67bcadf" containerName="mariadb-account-create-update"
Jan 27 19:00:42 crc kubenswrapper[4915]: E0127 19:00:42.724025 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ca1070-e322-4204-9b67-6bccf505b073" containerName="ovn-config"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.724033 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ca1070-e322-4204-9b67-6bccf505b073" containerName="ovn-config"
Jan 27 19:00:42 crc kubenswrapper[4915]: E0127 19:00:42.724054 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af75964e-e79e-49da-ad8d-37b14e63b5e6" containerName="swift-ring-rebalance"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.724063 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="af75964e-e79e-49da-ad8d-37b14e63b5e6" containerName="swift-ring-rebalance"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.724258 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="af75964e-e79e-49da-ad8d-37b14e63b5e6" containerName="swift-ring-rebalance"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.724285 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82f44e4-9f01-446a-9658-fa30f67bcadf" containerName="mariadb-account-create-update"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.724299 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ca1070-e322-4204-9b67-6bccf505b073" containerName="ovn-config"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.725016 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.728019 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.734106 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gn8ks"]
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.808097 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.808510 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-674tj\" (UniqueName: \"kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.911694 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.912116 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-674tj\" (UniqueName: \"kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.912682 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:42 crc kubenswrapper[4915]: I0127 19:00:42.966045 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-674tj\" (UniqueName: \"kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj\") pod \"root-account-create-update-gn8ks\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") " pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.062899 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.599165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gn8ks"]
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.932602 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gn8ks" event={"ID":"bb19a363-16d9-4c41-bfed-7087fd17bb9e","Type":"ContainerStarted","Data":"1ee0dcf89f1039dcee1b4b017dfe4671e43e97013e3f8cc394b283a18e9ecf21"}
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.933011 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gn8ks" event={"ID":"bb19a363-16d9-4c41-bfed-7087fd17bb9e","Type":"ContainerStarted","Data":"e1662320b0df37db4dd5784e92123791b9a167a3ecf634948ee4f420de8e2bdb"}
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.936848 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588"}
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.936878 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691"}
Jan 27 19:00:43 crc kubenswrapper[4915]: I0127 19:00:43.951068 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-gn8ks" podStartSLOduration=1.951045853 podStartE2EDuration="1.951045853s" podCreationTimestamp="2026-01-27 19:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:43.949998537 +0000 UTC m=+1135.307852201" watchObservedRunningTime="2026-01-27 19:00:43.951045853 +0000 UTC m=+1135.308899517"
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.948705 4915 generic.go:334] "Generic (PLEG): container finished" podID="75d11dfb-4c76-4968-81c0-d64a2272772b" containerID="2307aac0f134ed53b3b103835823b9021aefeb9cd3a5ef2f8acd9077a9f61d25" exitCode=0
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.948814 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nk9l7" event={"ID":"75d11dfb-4c76-4968-81c0-d64a2272772b","Type":"ContainerDied","Data":"2307aac0f134ed53b3b103835823b9021aefeb9cd3a5ef2f8acd9077a9f61d25"}
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.953643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd"}
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.953697 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b"}
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.957403 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb19a363-16d9-4c41-bfed-7087fd17bb9e" containerID="1ee0dcf89f1039dcee1b4b017dfe4671e43e97013e3f8cc394b283a18e9ecf21" exitCode=0
Jan 27 19:00:44 crc kubenswrapper[4915]: I0127 19:00:44.957463 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gn8ks" event={"ID":"bb19a363-16d9-4c41-bfed-7087fd17bb9e","Type":"ContainerDied","Data":"1ee0dcf89f1039dcee1b4b017dfe4671e43e97013e3f8cc394b283a18e9ecf21"}
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.375158 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.382071 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nk9l7"
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472429 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data\") pod \"75d11dfb-4c76-4968-81c0-d64a2272772b\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472553 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-674tj\" (UniqueName: \"kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj\") pod \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472607 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data\") pod \"75d11dfb-4c76-4968-81c0-d64a2272772b\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472732 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle\") pod \"75d11dfb-4c76-4968-81c0-d64a2272772b\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472872 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts\") pod \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\" (UID: \"bb19a363-16d9-4c41-bfed-7087fd17bb9e\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.472962 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhqcl\" (UniqueName: \"kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl\") pod \"75d11dfb-4c76-4968-81c0-d64a2272772b\" (UID: \"75d11dfb-4c76-4968-81c0-d64a2272772b\") "
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.473928 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb19a363-16d9-4c41-bfed-7087fd17bb9e" (UID: "bb19a363-16d9-4c41-bfed-7087fd17bb9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.474271 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb19a363-16d9-4c41-bfed-7087fd17bb9e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.477710 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75d11dfb-4c76-4968-81c0-d64a2272772b" (UID: "75d11dfb-4c76-4968-81c0-d64a2272772b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.478123 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl" (OuterVolumeSpecName: "kube-api-access-lhqcl") pod "75d11dfb-4c76-4968-81c0-d64a2272772b" (UID: "75d11dfb-4c76-4968-81c0-d64a2272772b"). InnerVolumeSpecName "kube-api-access-lhqcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.480346 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj" (OuterVolumeSpecName: "kube-api-access-674tj") pod "bb19a363-16d9-4c41-bfed-7087fd17bb9e" (UID: "bb19a363-16d9-4c41-bfed-7087fd17bb9e"). InnerVolumeSpecName "kube-api-access-674tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.496147 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75d11dfb-4c76-4968-81c0-d64a2272772b" (UID: "75d11dfb-4c76-4968-81c0-d64a2272772b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.517390 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data" (OuterVolumeSpecName: "config-data") pod "75d11dfb-4c76-4968-81c0-d64a2272772b" (UID: "75d11dfb-4c76-4968-81c0-d64a2272772b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.575534 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhqcl\" (UniqueName: \"kubernetes.io/projected/75d11dfb-4c76-4968-81c0-d64a2272772b-kube-api-access-lhqcl\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.575571 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.575581 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-674tj\" (UniqueName: \"kubernetes.io/projected/bb19a363-16d9-4c41-bfed-7087fd17bb9e-kube-api-access-674tj\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.575590 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:46 crc kubenswrapper[4915]: I0127 19:00:46.575600 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d11dfb-4c76-4968-81c0-d64a2272772b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.001265 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb"}
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.001316 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db"}
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.003091 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gn8ks"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.003139 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gn8ks" event={"ID":"bb19a363-16d9-4c41-bfed-7087fd17bb9e","Type":"ContainerDied","Data":"e1662320b0df37db4dd5784e92123791b9a167a3ecf634948ee4f420de8e2bdb"}
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.003186 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1662320b0df37db4dd5784e92123791b9a167a3ecf634948ee4f420de8e2bdb"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.009532 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nk9l7" event={"ID":"75d11dfb-4c76-4968-81c0-d64a2272772b","Type":"ContainerDied","Data":"222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10"}
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.009582 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222b42e5a7537264f30eee2fb9a0ff65a9632668ab312a7d531fefe8dc031b10"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.009653 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nk9l7"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.393290 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"]
Jan 27 19:00:47 crc kubenswrapper[4915]: E0127 19:00:47.393725 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d11dfb-4c76-4968-81c0-d64a2272772b" containerName="glance-db-sync"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.393740 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d11dfb-4c76-4968-81c0-d64a2272772b" containerName="glance-db-sync"
Jan 27 19:00:47 crc kubenswrapper[4915]: E0127 19:00:47.393757 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb19a363-16d9-4c41-bfed-7087fd17bb9e" containerName="mariadb-account-create-update"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.393767 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb19a363-16d9-4c41-bfed-7087fd17bb9e" containerName="mariadb-account-create-update"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.393965 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d11dfb-4c76-4968-81c0-d64a2272772b" containerName="glance-db-sync"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.393986 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb19a363-16d9-4c41-bfed-7087fd17bb9e" containerName="mariadb-account-create-update"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.394791 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"]
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.394907 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.418045 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.501176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.503075 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.503439 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsr8\" (UniqueName: \"kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.503472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.503586 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.605580 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsr8\" (UniqueName: \"kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.606153 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.606220 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.606340 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.606364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.610668 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.611376 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.611915 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.612826 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.650217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsr8\" (UniqueName: \"kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8\") pod \"dnsmasq-dns-74dc88fc-c5xhp\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.795512 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.837041 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w4pf8"]
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.839044 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.852334 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w4pf8"]
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.948851 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fpwgv"]
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.950203 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpwgv"
Jan 27 19:00:47 crc kubenswrapper[4915]: I0127 19:00:47.967989 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpwgv"]
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.012443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.012704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5f64\" (UniqueName: \"kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.016314 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.067522 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3741-account-create-update-t6kdm"]
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.071483 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3741-account-create-update-t6kdm"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.074590 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.108165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3741-account-create-update-t6kdm"]
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.116197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.116276 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5f64\" (UniqueName: \"kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.116325 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdgg\" (UniqueName: \"kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.116373 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.118854 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.125287 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"}
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.125329 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"}
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.125338 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"}
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.125346 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"}
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.125354 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerStarted","Data":"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"}
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.150140 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5f64\" (UniqueName: \"kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64\") pod \"barbican-db-create-w4pf8\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.162137 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.168863 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.489399574 podStartE2EDuration="36.168849719s" podCreationTimestamp="2026-01-27 19:00:12 +0000 UTC" firstStartedPulling="2026-01-27 19:00:34.947200943 +0000 UTC m=+1126.305054607" lastFinishedPulling="2026-01-27 19:00:46.626651088 +0000 UTC m=+1137.984504752" observedRunningTime="2026-01-27 19:00:48.168395238 +0000 UTC m=+1139.526248902" watchObservedRunningTime="2026-01-27 19:00:48.168849719 +0000 UTC m=+1139.526703383"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.202533 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"]
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.217093 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-58b8-account-create-update-nm57k"]
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.217597 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv"
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.217687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d2s52\" (UniqueName: \"kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.217740 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdgg\" (UniqueName: \"kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.217780 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.218144 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.219397 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.223870 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.237123 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-58b8-account-create-update-nm57k"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.251654 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-blb6q"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.252625 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.282742 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdgg\" (UniqueName: \"kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg\") pod \"cinder-db-create-fpwgv\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " pod="openstack/cinder-db-create-fpwgv" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.283209 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-lqql2"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.284321 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.289170 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.289498 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.290090 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fc7sv" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.290154 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.320954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hc4\" (UniqueName: \"kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321015 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321045 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwsk\" (UniqueName: \"kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc 
kubenswrapper[4915]: I0127 19:00:48.321072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321151 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321180 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321200 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321216 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrvw\" (UniqueName: \"kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " 
pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.321236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2s52\" (UniqueName: \"kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.324920 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.357679 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-blb6q"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.361659 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2s52\" (UniqueName: \"kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52\") pod \"barbican-3741-account-create-update-t6kdm\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.374710 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lqql2"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.422886 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" 
Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.422980 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.423011 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.423036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrvw\" (UniqueName: \"kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.423097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hc4\" (UniqueName: \"kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.425012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 
crc kubenswrapper[4915]: I0127 19:00:48.425091 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwsk\" (UniqueName: \"kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.426350 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.427172 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.428395 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.429036 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.439363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data\") pod 
\"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.453908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrvw\" (UniqueName: \"kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw\") pod \"keystone-db-sync-lqql2\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") " pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.458032 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hc4\" (UniqueName: \"kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4\") pod \"cinder-58b8-account-create-update-nm57k\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.465104 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwsk\" (UniqueName: \"kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk\") pod \"neutron-db-create-blb6q\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.468944 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9380-account-create-update-244pm"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.469881 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.472117 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.477939 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9380-account-create-update-244pm"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.526188 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskc9\" (UniqueName: \"kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.526433 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.543872 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.550091 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.579752 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpwgv" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.588294 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lqql2" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.631334 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.632357 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.632443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskc9\" (UniqueName: \"kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.633747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.684397 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskc9\" (UniqueName: \"kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9\") pod \"neutron-9380-account-create-update-244pm\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.687780 4915 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.691410 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.695581 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.721910 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733718 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733843 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733866 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4ltz\" (UniqueName: \"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733895 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733917 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.733938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.817916 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w4pf8"] Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.836373 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.836527 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 
19:00:48.836551 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4ltz\" (UniqueName: \"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.836578 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.836606 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.836633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.837444 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.837600 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.837658 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.837694 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.838290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: W0127 19:00:48.845945 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a7a7595_f421_4ec0_afa2_427233ac9191.slice/crio-b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae WatchSource:0}: Error finding container b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae: Status 404 returned error can't find the container with id b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 
19:00:48.859997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4ltz\" (UniqueName: \"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz\") pod \"dnsmasq-dns-5f59b8f679-45rqx\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:48 crc kubenswrapper[4915]: I0127 19:00:48.898528 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.000750 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gn8ks"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.008973 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gn8ks"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.025006 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3741-account-create-update-t6kdm"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.027503 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.139527 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3741-account-create-update-t6kdm" event={"ID":"725ac4ea-6fad-4426-bab4-3e4a5ff3251c","Type":"ContainerStarted","Data":"ecc5cd08de24525baa824f6e88c1f304d59613bb9898e9b2b52160cdcffc7179"} Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.142797 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w4pf8" event={"ID":"8a7a7595-f421-4ec0-afa2-427233ac9191","Type":"ContainerStarted","Data":"97d42013e83b1c626eff029049c8ac6906ce51db020228b85bd96d062e4efb8f"} Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.142835 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w4pf8" event={"ID":"8a7a7595-f421-4ec0-afa2-427233ac9191","Type":"ContainerStarted","Data":"b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae"} Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.159905 4915 generic.go:334] "Generic (PLEG): container finished" podID="683aae0c-4149-4786-b2b0-e0c1fabaa622" containerID="301f98a88f85de3d8bdc72d61c9d3da3b4f8cfdedc0f612f61551fcc1685370f" exitCode=0 Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.161137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp" event={"ID":"683aae0c-4149-4786-b2b0-e0c1fabaa622","Type":"ContainerDied","Data":"301f98a88f85de3d8bdc72d61c9d3da3b4f8cfdedc0f612f61551fcc1685370f"} Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.161168 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp" event={"ID":"683aae0c-4149-4786-b2b0-e0c1fabaa622","Type":"ContainerStarted","Data":"b3196f285a6ea99bbf8e21a228a93924d457a8f8e09d889fd87bda1122c80353"} Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.174044 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-w4pf8" podStartSLOduration=2.174021187 podStartE2EDuration="2.174021187s" podCreationTimestamp="2026-01-27 19:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:49.166063822 +0000 UTC m=+1140.523917486" watchObservedRunningTime="2026-01-27 19:00:49.174021187 +0000 UTC m=+1140.531874851" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.197466 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-blb6q"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.270904 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9380-account-create-update-244pm"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.299683 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fpwgv"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.334431 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-58b8-account-create-update-nm57k"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.350394 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.442153 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb19a363-16d9-4c41-bfed-7087fd17bb9e" path="/var/lib/kubelet/pods/bb19a363-16d9-4c41-bfed-7087fd17bb9e/volumes" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.443327 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lqql2"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.475722 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 19:00:49 crc kubenswrapper[4915]: W0127 19:00:49.506261 4915 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd599751d_9b39_4de0_bce6_d1fd71f1fe0e.slice/crio-c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b WatchSource:0}: Error finding container c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b: Status 404 returned error can't find the container with id c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.576149 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.653150 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc\") pod \"683aae0c-4149-4786-b2b0-e0c1fabaa622\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.653237 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb\") pod \"683aae0c-4149-4786-b2b0-e0c1fabaa622\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.653538 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.653564 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config\") pod \"683aae0c-4149-4786-b2b0-e0c1fabaa622\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.653684 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsr8\" 
(UniqueName: \"kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8\") pod \"683aae0c-4149-4786-b2b0-e0c1fabaa622\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.654110 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb\") pod \"683aae0c-4149-4786-b2b0-e0c1fabaa622\" (UID: \"683aae0c-4149-4786-b2b0-e0c1fabaa622\") " Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.664828 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8" (OuterVolumeSpecName: "kube-api-access-9wsr8") pod "683aae0c-4149-4786-b2b0-e0c1fabaa622" (UID: "683aae0c-4149-4786-b2b0-e0c1fabaa622"). InnerVolumeSpecName "kube-api-access-9wsr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.681371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "683aae0c-4149-4786-b2b0-e0c1fabaa622" (UID: "683aae0c-4149-4786-b2b0-e0c1fabaa622"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.692713 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "683aae0c-4149-4786-b2b0-e0c1fabaa622" (UID: "683aae0c-4149-4786-b2b0-e0c1fabaa622"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.698010 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config" (OuterVolumeSpecName: "config") pod "683aae0c-4149-4786-b2b0-e0c1fabaa622" (UID: "683aae0c-4149-4786-b2b0-e0c1fabaa622"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.719229 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "683aae0c-4149-4786-b2b0-e0c1fabaa622" (UID: "683aae0c-4149-4786-b2b0-e0c1fabaa622"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.756869 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.756899 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsr8\" (UniqueName: \"kubernetes.io/projected/683aae0c-4149-4786-b2b0-e0c1fabaa622-kube-api-access-9wsr8\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.756910 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 19:00:49.756918 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:49 crc kubenswrapper[4915]: I0127 
19:00:49.756926 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/683aae0c-4149-4786-b2b0-e0c1fabaa622-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.169420 4915 generic.go:334] "Generic (PLEG): container finished" podID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerID="c208aa897382a36123d50e1ea00cb967b93198d8c02de96a5f4eb928316c3b11" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.169485 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" event={"ID":"c29f99b1-e205-4491-8c84-7d6247e3752c","Type":"ContainerDied","Data":"c208aa897382a36123d50e1ea00cb967b93198d8c02de96a5f4eb928316c3b11"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.169510 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" event={"ID":"c29f99b1-e205-4491-8c84-7d6247e3752c","Type":"ContainerStarted","Data":"4dd87f49d1489b815ca04ecc09b6a77b3e7a78968e4a303e8f38694210fa48fa"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.172869 4915 generic.go:334] "Generic (PLEG): container finished" podID="375893e2-1bac-422d-9768-3b462dccb8a5" containerID="6887c25ea3297356377874383d56d4801336bcec59eeece8e5861f7ce89c3764" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.172928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpwgv" event={"ID":"375893e2-1bac-422d-9768-3b462dccb8a5","Type":"ContainerDied","Data":"6887c25ea3297356377874383d56d4801336bcec59eeece8e5861f7ce89c3764"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.172951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpwgv" event={"ID":"375893e2-1bac-422d-9768-3b462dccb8a5","Type":"ContainerStarted","Data":"e7232fc7c44c8dfcae80b9df71d9828794f731b59ee234bcf07d48c91117b7a9"} Jan 27 19:00:50 crc 
kubenswrapper[4915]: I0127 19:00:50.174715 4915 generic.go:334] "Generic (PLEG): container finished" podID="021d6127-8503-45c4-b420-7fd173e1a789" containerID="5d11d9c2ec59f64a391e2f8f2814d678f9353abb0906a39b1d2681cc0384e324" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.174784 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9380-account-create-update-244pm" event={"ID":"021d6127-8503-45c4-b420-7fd173e1a789","Type":"ContainerDied","Data":"5d11d9c2ec59f64a391e2f8f2814d678f9353abb0906a39b1d2681cc0384e324"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.174835 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9380-account-create-update-244pm" event={"ID":"021d6127-8503-45c4-b420-7fd173e1a789","Type":"ContainerStarted","Data":"2ea0fcd15bf3a763d4bb4aa02d1d654d188e37bb792f9c64784a6961e5eb1bb3"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.176170 4915 generic.go:334] "Generic (PLEG): container finished" podID="8a7a7595-f421-4ec0-afa2-427233ac9191" containerID="97d42013e83b1c626eff029049c8ac6906ce51db020228b85bd96d062e4efb8f" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.176228 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w4pf8" event={"ID":"8a7a7595-f421-4ec0-afa2-427233ac9191","Type":"ContainerDied","Data":"97d42013e83b1c626eff029049c8ac6906ce51db020228b85bd96d062e4efb8f"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.177446 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lqql2" event={"ID":"d599751d-9b39-4de0-bce6-d1fd71f1fe0e","Type":"ContainerStarted","Data":"c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.179275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp" 
event={"ID":"683aae0c-4149-4786-b2b0-e0c1fabaa622","Type":"ContainerDied","Data":"b3196f285a6ea99bbf8e21a228a93924d457a8f8e09d889fd87bda1122c80353"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.179315 4915 scope.go:117] "RemoveContainer" containerID="301f98a88f85de3d8bdc72d61c9d3da3b4f8cfdedc0f612f61551fcc1685370f" Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.179424 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-c5xhp" Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.184040 4915 generic.go:334] "Generic (PLEG): container finished" podID="b821973f-c181-4100-b31a-d9f1e9f90ebd" containerID="49805bddf1add5eeff06cfb04aca4a72f2aa708024fbe735b10e3d7051309f6f" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.184173 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-58b8-account-create-update-nm57k" event={"ID":"b821973f-c181-4100-b31a-d9f1e9f90ebd","Type":"ContainerDied","Data":"49805bddf1add5eeff06cfb04aca4a72f2aa708024fbe735b10e3d7051309f6f"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.184262 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-58b8-account-create-update-nm57k" event={"ID":"b821973f-c181-4100-b31a-d9f1e9f90ebd","Type":"ContainerStarted","Data":"b786cf39677e4eb48cb847a8f01fc324f72b9445b203d7a5b1c687fd4ac72c98"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.189202 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6908876-dd99-42e8-b22f-de6349dca8e5" containerID="dbafdb8a016d70bdd8b59a83241783799f12d6166b6a556bd79131357dd84181" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.189327 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-blb6q" event={"ID":"a6908876-dd99-42e8-b22f-de6349dca8e5","Type":"ContainerDied","Data":"dbafdb8a016d70bdd8b59a83241783799f12d6166b6a556bd79131357dd84181"} Jan 27 19:00:50 
crc kubenswrapper[4915]: I0127 19:00:50.189456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-blb6q" event={"ID":"a6908876-dd99-42e8-b22f-de6349dca8e5","Type":"ContainerStarted","Data":"d8b1bda040ec680ed0dac77c00e846c5129a1af3eed9293b8be736a25b185b62"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.194269 4915 generic.go:334] "Generic (PLEG): container finished" podID="725ac4ea-6fad-4426-bab4-3e4a5ff3251c" containerID="6fca1c9af90ea84832fc9b8dba07e316c52eae6fd6d8b293f38c5cf84004bd05" exitCode=0 Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.194310 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3741-account-create-update-t6kdm" event={"ID":"725ac4ea-6fad-4426-bab4-3e4a5ff3251c","Type":"ContainerDied","Data":"6fca1c9af90ea84832fc9b8dba07e316c52eae6fd6d8b293f38c5cf84004bd05"} Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.400854 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"] Jan 27 19:00:50 crc kubenswrapper[4915]: I0127 19:00:50.413616 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-c5xhp"] Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.203131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" event={"ID":"c29f99b1-e205-4491-8c84-7d6247e3752c","Type":"ContainerStarted","Data":"b7dd8773947a7d57e333284840daf7a29c6af036c5f4dc30bfd43ad807d04fa7"} Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.203379 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.221997 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" podStartSLOduration=3.221977474 podStartE2EDuration="3.221977474s" podCreationTimestamp="2026-01-27 19:00:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:00:51.221863151 +0000 UTC m=+1142.579716815" watchObservedRunningTime="2026-01-27 19:00:51.221977474 +0000 UTC m=+1142.579831138" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.382355 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683aae0c-4149-4786-b2b0-e0c1fabaa622" path="/var/lib/kubelet/pods/683aae0c-4149-4786-b2b0-e0c1fabaa622/volumes" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.663941 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-58b8-account-create-update-nm57k" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.685053 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hc4\" (UniqueName: \"kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4\") pod \"b821973f-c181-4100-b31a-d9f1e9f90ebd\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.685300 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts\") pod \"b821973f-c181-4100-b31a-d9f1e9f90ebd\" (UID: \"b821973f-c181-4100-b31a-d9f1e9f90ebd\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.686228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b821973f-c181-4100-b31a-d9f1e9f90ebd" (UID: "b821973f-c181-4100-b31a-d9f1e9f90ebd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.686733 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b821973f-c181-4100-b31a-d9f1e9f90ebd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.692473 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4" (OuterVolumeSpecName: "kube-api-access-b8hc4") pod "b821973f-c181-4100-b31a-d9f1e9f90ebd" (UID: "b821973f-c181-4100-b31a-d9f1e9f90ebd"). InnerVolumeSpecName "kube-api-access-b8hc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.787470 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8hc4\" (UniqueName: \"kubernetes.io/projected/b821973f-c181-4100-b31a-d9f1e9f90ebd-kube-api-access-b8hc4\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.881063 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3741-account-create-update-t6kdm" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.887771 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts\") pod \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.887868 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2s52\" (UniqueName: \"kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52\") pod \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\" (UID: \"725ac4ea-6fad-4426-bab4-3e4a5ff3251c\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.888820 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w4pf8" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.889160 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "725ac4ea-6fad-4426-bab4-3e4a5ff3251c" (UID: "725ac4ea-6fad-4426-bab4-3e4a5ff3251c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.891242 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52" (OuterVolumeSpecName: "kube-api-access-d2s52") pod "725ac4ea-6fad-4426-bab4-3e4a5ff3251c" (UID: "725ac4ea-6fad-4426-bab4-3e4a5ff3251c"). InnerVolumeSpecName "kube-api-access-d2s52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.899583 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9380-account-create-update-244pm" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.904108 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-blb6q" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.917630 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpwgv" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988459 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts\") pod \"375893e2-1bac-422d-9768-3b462dccb8a5\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988760 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5f64\" (UniqueName: \"kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64\") pod \"8a7a7595-f421-4ec0-afa2-427233ac9191\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988837 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts\") pod \"a6908876-dd99-42e8-b22f-de6349dca8e5\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988860 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdgg\" (UniqueName: \"kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg\") pod 
\"375893e2-1bac-422d-9768-3b462dccb8a5\" (UID: \"375893e2-1bac-422d-9768-3b462dccb8a5\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988889 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts\") pod \"8a7a7595-f421-4ec0-afa2-427233ac9191\" (UID: \"8a7a7595-f421-4ec0-afa2-427233ac9191\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988909 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbwsk\" (UniqueName: \"kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk\") pod \"a6908876-dd99-42e8-b22f-de6349dca8e5\" (UID: \"a6908876-dd99-42e8-b22f-de6349dca8e5\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988946 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jskc9\" (UniqueName: \"kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9\") pod \"021d6127-8503-45c4-b420-7fd173e1a789\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.988969 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts\") pod \"021d6127-8503-45c4-b420-7fd173e1a789\" (UID: \"021d6127-8503-45c4-b420-7fd173e1a789\") " Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.989169 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.989182 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2s52\" (UniqueName: 
\"kubernetes.io/projected/725ac4ea-6fad-4426-bab4-3e4a5ff3251c-kube-api-access-d2s52\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.989521 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "021d6127-8503-45c4-b420-7fd173e1a789" (UID: "021d6127-8503-45c4-b420-7fd173e1a789"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.989884 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "375893e2-1bac-422d-9768-3b462dccb8a5" (UID: "375893e2-1bac-422d-9768-3b462dccb8a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.990408 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a7a7595-f421-4ec0-afa2-427233ac9191" (UID: "8a7a7595-f421-4ec0-afa2-427233ac9191"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.990615 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6908876-dd99-42e8-b22f-de6349dca8e5" (UID: "a6908876-dd99-42e8-b22f-de6349dca8e5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.996037 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64" (OuterVolumeSpecName: "kube-api-access-g5f64") pod "8a7a7595-f421-4ec0-afa2-427233ac9191" (UID: "8a7a7595-f421-4ec0-afa2-427233ac9191"). InnerVolumeSpecName "kube-api-access-g5f64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.996074 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg" (OuterVolumeSpecName: "kube-api-access-hbdgg") pod "375893e2-1bac-422d-9768-3b462dccb8a5" (UID: "375893e2-1bac-422d-9768-3b462dccb8a5"). InnerVolumeSpecName "kube-api-access-hbdgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.996353 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk" (OuterVolumeSpecName: "kube-api-access-jbwsk") pod "a6908876-dd99-42e8-b22f-de6349dca8e5" (UID: "a6908876-dd99-42e8-b22f-de6349dca8e5"). InnerVolumeSpecName "kube-api-access-jbwsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:51 crc kubenswrapper[4915]: I0127 19:00:51.996844 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9" (OuterVolumeSpecName: "kube-api-access-jskc9") pod "021d6127-8503-45c4-b420-7fd173e1a789" (UID: "021d6127-8503-45c4-b420-7fd173e1a789"). InnerVolumeSpecName "kube-api-access-jskc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090442 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jskc9\" (UniqueName: \"kubernetes.io/projected/021d6127-8503-45c4-b420-7fd173e1a789-kube-api-access-jskc9\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090483 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/021d6127-8503-45c4-b420-7fd173e1a789-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090497 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/375893e2-1bac-422d-9768-3b462dccb8a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090509 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5f64\" (UniqueName: \"kubernetes.io/projected/8a7a7595-f421-4ec0-afa2-427233ac9191-kube-api-access-g5f64\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090520 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6908876-dd99-42e8-b22f-de6349dca8e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090532 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbdgg\" (UniqueName: \"kubernetes.io/projected/375893e2-1bac-422d-9768-3b462dccb8a5-kube-api-access-hbdgg\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090542 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a7a7595-f421-4ec0-afa2-427233ac9191-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 
19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.090555 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbwsk\" (UniqueName: \"kubernetes.io/projected/a6908876-dd99-42e8-b22f-de6349dca8e5-kube-api-access-jbwsk\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.217468 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9380-account-create-update-244pm"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.223049 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9380-account-create-update-244pm" event={"ID":"021d6127-8503-45c4-b420-7fd173e1a789","Type":"ContainerDied","Data":"2ea0fcd15bf3a763d4bb4aa02d1d654d188e37bb792f9c64784a6961e5eb1bb3"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.223106 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea0fcd15bf3a763d4bb4aa02d1d654d188e37bb792f9c64784a6961e5eb1bb3"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.225887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-58b8-account-create-update-nm57k" event={"ID":"b821973f-c181-4100-b31a-d9f1e9f90ebd","Type":"ContainerDied","Data":"b786cf39677e4eb48cb847a8f01fc324f72b9445b203d7a5b1c687fd4ac72c98"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.225921 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b786cf39677e4eb48cb847a8f01fc324f72b9445b203d7a5b1c687fd4ac72c98"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.225941 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-58b8-account-create-update-nm57k"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.227290 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-blb6q" event={"ID":"a6908876-dd99-42e8-b22f-de6349dca8e5","Type":"ContainerDied","Data":"d8b1bda040ec680ed0dac77c00e846c5129a1af3eed9293b8be736a25b185b62"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.227310 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8b1bda040ec680ed0dac77c00e846c5129a1af3eed9293b8be736a25b185b62"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.227328 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-blb6q"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.228294 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3741-account-create-update-t6kdm" event={"ID":"725ac4ea-6fad-4426-bab4-3e4a5ff3251c","Type":"ContainerDied","Data":"ecc5cd08de24525baa824f6e88c1f304d59613bb9898e9b2b52160cdcffc7179"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.228316 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc5cd08de24525baa824f6e88c1f304d59613bb9898e9b2b52160cdcffc7179"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.228367 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3741-account-create-update-t6kdm"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.236070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fpwgv" event={"ID":"375893e2-1bac-422d-9768-3b462dccb8a5","Type":"ContainerDied","Data":"e7232fc7c44c8dfcae80b9df71d9828794f731b59ee234bcf07d48c91117b7a9"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.236117 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7232fc7c44c8dfcae80b9df71d9828794f731b59ee234bcf07d48c91117b7a9"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.236141 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fpwgv"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.238368 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w4pf8" event={"ID":"8a7a7595-f421-4ec0-afa2-427233ac9191","Type":"ContainerDied","Data":"b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae"}
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.238415 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cd8592425c1e74626388bd098ede463c42a3f12fa033f42d81bcd7b8b5fcae"
Jan 27 19:00:52 crc kubenswrapper[4915]: I0127 19:00:52.238523 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w4pf8"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.026862 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-67s8h"]
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.027773 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b821973f-c181-4100-b31a-d9f1e9f90ebd" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.027822 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b821973f-c181-4100-b31a-d9f1e9f90ebd" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.027840 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7a7595-f421-4ec0-afa2-427233ac9191" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.027851 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7a7595-f421-4ec0-afa2-427233ac9191" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.027882 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725ac4ea-6fad-4426-bab4-3e4a5ff3251c" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.027895 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="725ac4ea-6fad-4426-bab4-3e4a5ff3251c" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.027927 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375893e2-1bac-422d-9768-3b462dccb8a5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.027941 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="375893e2-1bac-422d-9768-3b462dccb8a5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.027961 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683aae0c-4149-4786-b2b0-e0c1fabaa622" containerName="init"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.027971 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="683aae0c-4149-4786-b2b0-e0c1fabaa622" containerName="init"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.028167 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6908876-dd99-42e8-b22f-de6349dca8e5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028178 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6908876-dd99-42e8-b22f-de6349dca8e5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: E0127 19:00:54.028199 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021d6127-8503-45c4-b420-7fd173e1a789" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028209 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="021d6127-8503-45c4-b420-7fd173e1a789" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028452 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="683aae0c-4149-4786-b2b0-e0c1fabaa622" containerName="init"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028479 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="375893e2-1bac-422d-9768-3b462dccb8a5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028494 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b821973f-c181-4100-b31a-d9f1e9f90ebd" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028509 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="021d6127-8503-45c4-b420-7fd173e1a789" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028528 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7a7595-f421-4ec0-afa2-427233ac9191" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028558 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="725ac4ea-6fad-4426-bab4-3e4a5ff3251c" containerName="mariadb-account-create-update"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.028572 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6908876-dd99-42e8-b22f-de6349dca8e5" containerName="mariadb-database-create"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.029594 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.035267 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.035980 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-67s8h"]
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.226672 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ktq\" (UniqueName: \"kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.226725 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.328483 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ktq\" (UniqueName: \"kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.328567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.329471 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.355542 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ktq\" (UniqueName: \"kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq\") pod \"root-account-create-update-67s8h\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") " pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:54 crc kubenswrapper[4915]: I0127 19:00:54.654710 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:55 crc kubenswrapper[4915]: I0127 19:00:55.178272 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-67s8h"]
Jan 27 19:00:55 crc kubenswrapper[4915]: I0127 19:00:55.265673 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-67s8h" event={"ID":"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd","Type":"ContainerStarted","Data":"2ab83ae6928f699035e229851c0ed98b2399e3337fa617a1f436d36368ad03ae"}
Jan 27 19:00:55 crc kubenswrapper[4915]: I0127 19:00:55.270048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lqql2" event={"ID":"d599751d-9b39-4de0-bce6-d1fd71f1fe0e","Type":"ContainerStarted","Data":"986dc9ed2573c43d80b5094114f512209e4b19c99546c56aae48a2da45b7c15d"}
Jan 27 19:00:55 crc kubenswrapper[4915]: I0127 19:00:55.292374 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-lqql2" podStartSLOduration=2.39097233 podStartE2EDuration="7.292353564s" podCreationTimestamp="2026-01-27 19:00:48 +0000 UTC" firstStartedPulling="2026-01-27 19:00:49.508127793 +0000 UTC m=+1140.865981457" lastFinishedPulling="2026-01-27 19:00:54.409509027 +0000 UTC m=+1145.767362691" observedRunningTime="2026-01-27 19:00:55.285729451 +0000 UTC m=+1146.643583115" watchObservedRunningTime="2026-01-27 19:00:55.292353564 +0000 UTC m=+1146.650207228"
Jan 27 19:00:56 crc kubenswrapper[4915]: I0127 19:00:56.281136 4915 generic.go:334] "Generic (PLEG): container finished" podID="2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" containerID="8e59ad30f91880c70b1191ba7fdf5268caf84fda39488bb872a4b8b61d203b13" exitCode=0
Jan 27 19:00:56 crc kubenswrapper[4915]: I0127 19:00:56.281192 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-67s8h" event={"ID":"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd","Type":"ContainerDied","Data":"8e59ad30f91880c70b1191ba7fdf5268caf84fda39488bb872a4b8b61d203b13"}
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.667668 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.824462 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ktq\" (UniqueName: \"kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq\") pod \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") "
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.824582 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts\") pod \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\" (UID: \"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd\") "
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.825746 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" (UID: "2368f66b-5bd9-4cf4-8dc9-848d7565f1cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.830660 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq" (OuterVolumeSpecName: "kube-api-access-m2ktq") pod "2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" (UID: "2368f66b-5bd9-4cf4-8dc9-848d7565f1cd"). InnerVolumeSpecName "kube-api-access-m2ktq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.927021 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ktq\" (UniqueName: \"kubernetes.io/projected/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-kube-api-access-m2ktq\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:57 crc kubenswrapper[4915]: I0127 19:00:57.927061 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:58 crc kubenswrapper[4915]: I0127 19:00:58.305427 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-67s8h" event={"ID":"2368f66b-5bd9-4cf4-8dc9-848d7565f1cd","Type":"ContainerDied","Data":"2ab83ae6928f699035e229851c0ed98b2399e3337fa617a1f436d36368ad03ae"}
Jan 27 19:00:58 crc kubenswrapper[4915]: I0127 19:00:58.305499 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab83ae6928f699035e229851c0ed98b2399e3337fa617a1f436d36368ad03ae"
Jan 27 19:00:58 crc kubenswrapper[4915]: I0127 19:00:58.305453 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-67s8h"
Jan 27 19:00:58 crc kubenswrapper[4915]: I0127 19:00:58.308346 4915 generic.go:334] "Generic (PLEG): container finished" podID="d599751d-9b39-4de0-bce6-d1fd71f1fe0e" containerID="986dc9ed2573c43d80b5094114f512209e4b19c99546c56aae48a2da45b7c15d" exitCode=0
Jan 27 19:00:58 crc kubenswrapper[4915]: I0127 19:00:58.308405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lqql2" event={"ID":"d599751d-9b39-4de0-bce6-d1fd71f1fe0e","Type":"ContainerDied","Data":"986dc9ed2573c43d80b5094114f512209e4b19c99546c56aae48a2da45b7c15d"}
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.030009 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx"
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.082903 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"]
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.083133 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="dnsmasq-dns" containerID="cri-o://93566813ef8a71e9665950bb1a85a128a064e1dce7aaeafbf6ff3f225b55f567" gracePeriod=10
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.327059 4915 generic.go:334] "Generic (PLEG): container finished" podID="84957bd3-d9f0-4452-9311-c1dee4133184" containerID="93566813ef8a71e9665950bb1a85a128a064e1dce7aaeafbf6ff3f225b55f567" exitCode=0
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.327287 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" event={"ID":"84957bd3-d9f0-4452-9311-c1dee4133184","Type":"ContainerDied","Data":"93566813ef8a71e9665950bb1a85a128a064e1dce7aaeafbf6ff3f225b55f567"}
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.560448 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.633631 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lqql2"
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data\") pod \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663397 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc\") pod \"84957bd3-d9f0-4452-9311-c1dee4133184\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663436 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xglhn\" (UniqueName: \"kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn\") pod \"84957bd3-d9f0-4452-9311-c1dee4133184\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663460 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb\") pod \"84957bd3-d9f0-4452-9311-c1dee4133184\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb\") pod \"84957bd3-d9f0-4452-9311-c1dee4133184\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663513 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle\") pod \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663564 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config\") pod \"84957bd3-d9f0-4452-9311-c1dee4133184\" (UID: \"84957bd3-d9f0-4452-9311-c1dee4133184\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.663665 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrvw\" (UniqueName: \"kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw\") pod \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\" (UID: \"d599751d-9b39-4de0-bce6-d1fd71f1fe0e\") "
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.669929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw" (OuterVolumeSpecName: "kube-api-access-rbrvw") pod "d599751d-9b39-4de0-bce6-d1fd71f1fe0e" (UID: "d599751d-9b39-4de0-bce6-d1fd71f1fe0e"). InnerVolumeSpecName "kube-api-access-rbrvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.675441 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn" (OuterVolumeSpecName: "kube-api-access-xglhn") pod "84957bd3-d9f0-4452-9311-c1dee4133184" (UID: "84957bd3-d9f0-4452-9311-c1dee4133184"). InnerVolumeSpecName "kube-api-access-xglhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.687934 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d599751d-9b39-4de0-bce6-d1fd71f1fe0e" (UID: "d599751d-9b39-4de0-bce6-d1fd71f1fe0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.701410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config" (OuterVolumeSpecName: "config") pod "84957bd3-d9f0-4452-9311-c1dee4133184" (UID: "84957bd3-d9f0-4452-9311-c1dee4133184"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.703363 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84957bd3-d9f0-4452-9311-c1dee4133184" (UID: "84957bd3-d9f0-4452-9311-c1dee4133184"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.705191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84957bd3-d9f0-4452-9311-c1dee4133184" (UID: "84957bd3-d9f0-4452-9311-c1dee4133184"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.712912 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84957bd3-d9f0-4452-9311-c1dee4133184" (UID: "84957bd3-d9f0-4452-9311-c1dee4133184"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.715385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data" (OuterVolumeSpecName: "config-data") pod "d599751d-9b39-4de0-bce6-d1fd71f1fe0e" (UID: "d599751d-9b39-4de0-bce6-d1fd71f1fe0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766177 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766207 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766216 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766246 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766256 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrvw\" (UniqueName: \"kubernetes.io/projected/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-kube-api-access-rbrvw\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766267 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d599751d-9b39-4de0-bce6-d1fd71f1fe0e-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766274 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84957bd3-d9f0-4452-9311-c1dee4133184-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 19:00:59 crc kubenswrapper[4915]: I0127 19:00:59.766284 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xglhn\" (UniqueName: \"kubernetes.io/projected/84957bd3-d9f0-4452-9311-c1dee4133184-kube-api-access-xglhn\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.337888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lqql2" event={"ID":"d599751d-9b39-4de0-bce6-d1fd71f1fe0e","Type":"ContainerDied","Data":"c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b"}
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.337936 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45a25b08d29d1b2779b024960440c24904ad11b12726103b1621c84b050a48b"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.337952 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lqql2"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.342782 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh" event={"ID":"84957bd3-d9f0-4452-9311-c1dee4133184","Type":"ContainerDied","Data":"7b2723cf717d8354cb1bf36c546533768f97acd5eefd2607fa5f387f361c9eea"}
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.342950 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bm9dh"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.342953 4915 scope.go:117] "RemoveContainer" containerID="93566813ef8a71e9665950bb1a85a128a064e1dce7aaeafbf6ff3f225b55f567"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.394558 4915 scope.go:117] "RemoveContainer" containerID="f53109c47e79e8245f76b3f3cb3f5dddf48f6c680d364bf20f999fb39b79f031"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.404068 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"]
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.418289 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bm9dh"]
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636216 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"]
Jan 27 19:01:00 crc kubenswrapper[4915]: E0127 19:01:00.636626 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="init"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636646 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="init"
Jan 27 19:01:00 crc kubenswrapper[4915]: E0127 19:01:00.636667 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" containerName="mariadb-account-create-update"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636675 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" containerName="mariadb-account-create-update"
Jan 27 19:01:00 crc kubenswrapper[4915]: E0127 19:01:00.636683 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d599751d-9b39-4de0-bce6-d1fd71f1fe0e" containerName="keystone-db-sync"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636690 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d599751d-9b39-4de0-bce6-d1fd71f1fe0e" containerName="keystone-db-sync"
Jan 27 19:01:00 crc kubenswrapper[4915]: E0127 19:01:00.636701 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="dnsmasq-dns"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636706 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="dnsmasq-dns"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636867 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" containerName="dnsmasq-dns"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636880 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" containerName="mariadb-account-create-update"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.636899 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d599751d-9b39-4de0-bce6-d1fd71f1fe0e" containerName="keystone-db-sync"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.637729 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.647066 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rfd8l"]
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.649004 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.670995 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.671101 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.671214 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.671262 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.672645 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fc7sv"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685566 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rdr\" (UniqueName: \"kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685690 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685734 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685751 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.685773 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.686753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.686783 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdb8r\" (UniqueName: \"kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.686819 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.686852 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.686877 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.719074 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"]
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.719137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rfd8l"]
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789575 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789621 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789650 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789681 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l"
Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789697 4915 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-vdb8r\" (UniqueName: \"kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789713 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789739 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789760 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789784 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rdr\" (UniqueName: \"kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789815 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789830 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.789879 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.790716 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.791365 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.798189 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.799778 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.800532 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.803585 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.804329 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.804655 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts\") pod \"keystone-bootstrap-rfd8l\" (UID: 
\"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.811948 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.815690 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.825065 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rdr\" (UniqueName: \"kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr\") pod \"keystone-bootstrap-rfd8l\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.826336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb8r\" (UniqueName: \"kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r\") pod \"dnsmasq-dns-bbf5cc879-d88pg\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.937636 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7drwm"] Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.938859 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.947266 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.948330 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.949046 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hfk4n" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.955318 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.969975 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.978396 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vs7dx"] Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.982075 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.992200 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 19:01:00 crc kubenswrapper[4915]: I0127 19:01:00.993297 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.014929 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxhw\" (UniqueName: \"kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.014990 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015134 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 
19:01:01.015163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015280 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbtc\" (UniqueName: \"kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015300 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015343 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.015359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.016422 4915 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqr4f" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.035333 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7drwm"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.069073 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vs7dx"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.091607 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tbtjt"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.093450 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.098100 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ctllf" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.098346 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124264 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvbtc\" (UniqueName: \"kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124305 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124337 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2crb\" (UniqueName: \"kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124365 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124380 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124400 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhxhw\" (UniqueName: \"kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124610 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124651 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.124685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.125685 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id\") pod \"cinder-db-sync-7drwm\" 
(UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.131301 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.132194 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.134312 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tbtjt"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.140876 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.142086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.163332 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config\") pod \"neutron-db-sync-vs7dx\" (UID: 
\"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.163573 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.163715 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhxhw\" (UniqueName: \"kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw\") pod \"neutron-db-sync-vs7dx\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") " pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.168433 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4ppcd"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.169714 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.172249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvbtc\" (UniqueName: \"kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc\") pod \"cinder-db-sync-7drwm\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") " pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.176077 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.176520 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hxwmd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.176658 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.225125 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4ppcd"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229752 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48sw\" (UniqueName: \"kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229806 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229839 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2crb\" (UniqueName: \"kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229925 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229943 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.229977 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.236529 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.249385 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.249388 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.252699 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.254867 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.258633 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.258865 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.261963 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2crb\" (UniqueName: \"kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb\") pod \"barbican-db-sync-tbtjt\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") " pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.267747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.279719 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.281188 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.303342 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.333577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.333617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.333642 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.335503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336737 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336778 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjr4f\" (UniqueName: \"kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336822 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336875 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.336985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r48sw\" (UniqueName: \"kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337188 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337296 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337336 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337371 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337397 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.337451 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjnf\" (UniqueName: \"kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.338323 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.343556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " 
pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.352586 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.369444 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84957bd3-d9f0-4452-9311-c1dee4133184" path="/var/lib/kubelet/pods/84957bd3-d9f0-4452-9311-c1dee4133184/volumes" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.376192 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.388110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48sw\" (UniqueName: \"kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw\") pod \"placement-db-sync-4ppcd\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.399541 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7drwm" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.430339 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs7dx" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.431723 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439919 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjr4f\" (UniqueName: \"kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439978 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.439997 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440016 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440094 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440112 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440131 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: 
\"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440155 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjnf\" (UniqueName: \"kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.440200 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.459703 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.460103 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.461932 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.462171 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.462907 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.462975 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.463137 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.464808 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tbtjt" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.465606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.465762 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjnf\" (UniqueName: \"kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.465872 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjr4f\" (UniqueName: \"kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f\") pod \"ceilometer-0\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") " pod="openstack/ceilometer-0" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.467243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.467452 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 
19:01:01.467724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-rsxp6\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:01 crc kubenswrapper[4915]: I0127 19:01:01.627130 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"] Jan 27 19:01:01 crc kubenswrapper[4915]: W0127 19:01:01.648950 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0cdc0f_0017_434d_a2ef_eb1b8c8ec007.slice/crio-87f8a50569e849d6edad4916454caee376147392175c78bf73d1f323ba704086 WatchSource:0}: Error finding container 87f8a50569e849d6edad4916454caee376147392175c78bf73d1f323ba704086: Status 404 returned error can't find the container with id 87f8a50569e849d6edad4916454caee376147392175c78bf73d1f323ba704086 Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.711607 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rfd8l"] Jan 27 19:01:03 crc kubenswrapper[4915]: W0127 19:01:01.736360 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b2ce2a3_5bf2_4522_b081_b4bfd6bdcb26.slice/crio-995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941 WatchSource:0}: Error finding container 995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941: Status 404 returned error can't find the container with id 995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941 Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.746247 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.765028 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.864234 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.866193 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.870616 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.870773 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.870916 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc55j" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.878586 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.883559 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.893937 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7drwm"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958207 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 
27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958356 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958492 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958521 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5k5k\" (UniqueName: \"kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958586 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " 
pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958641 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:01.958703 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076041 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076520 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 
19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5k5k\" (UniqueName: \"kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076661 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076692 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.076721 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.080011 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.081196 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.082247 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.089811 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.091578 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.091951 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.092407 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.103909 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.105448 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5k5k\" (UniqueName: \"kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.119778 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.125442 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.125838 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.144193 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.178621 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.178668 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.178697 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.178847 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.178948 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.179130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.179193 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.179303 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdw8k\" (UniqueName: \"kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.200396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.209557 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281311 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281354 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdw8k\" (UniqueName: \"kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281452 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281515 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.281549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.283551 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.283577 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.283935 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.289140 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.289602 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.291512 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.293831 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle\") 
pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.304266 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdw8k\" (UniqueName: \"kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.314973 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.371864 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7drwm" event={"ID":"99e935b3-c64c-4c02-821b-18301c6b6c27","Type":"ContainerStarted","Data":"f7558eccc1ef72598e2149a47aa4b003380a4ba3f2928ff7147a818001d7b66b"} Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.372748 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rfd8l" event={"ID":"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26","Type":"ContainerStarted","Data":"753760b10b6c7ebc4f0b10f1a269dd6898612cf07c1c2cfc1191d8b5e78b2a3f"} Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.372764 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rfd8l" event={"ID":"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26","Type":"ContainerStarted","Data":"995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941"} Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.374511 4915 generic.go:334] "Generic (PLEG): container finished" podID="ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" 
containerID="a381f93852c99e35ede6c45c19dd017ebf80c7e66e3589dbfd9ed2e9707722e4" exitCode=0 Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.374533 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" event={"ID":"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007","Type":"ContainerDied","Data":"a381f93852c99e35ede6c45c19dd017ebf80c7e66e3589dbfd9ed2e9707722e4"} Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.374545 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" event={"ID":"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007","Type":"ContainerStarted","Data":"87f8a50569e849d6edad4916454caee376147392175c78bf73d1f323ba704086"} Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.450121 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rfd8l" podStartSLOduration=2.450091998 podStartE2EDuration="2.450091998s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:02.422706486 +0000 UTC m=+1153.780560140" watchObservedRunningTime="2026-01-27 19:01:02.450091998 +0000 UTC m=+1153.807945652" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:02.451195 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.460918 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.535685 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.703272 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.889953 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942053 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942203 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdb8r\" (UniqueName: \"kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942270 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942348 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.942375 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc\") pod \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\" (UID: \"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007\") " Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.949228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r" (OuterVolumeSpecName: "kube-api-access-vdb8r") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "kube-api-access-vdb8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.967878 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.970643 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.974423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.980739 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:03 crc kubenswrapper[4915]: I0127 19:01:03.993846 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config" (OuterVolumeSpecName: "config") pod "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" (UID: "ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045527 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045581 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdb8r\" (UniqueName: \"kubernetes.io/projected/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-kube-api-access-vdb8r\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045592 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045603 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045613 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.045621 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.177037 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tbtjt"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.210802 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:01:04 crc 
kubenswrapper[4915]: I0127 19:01:04.233994 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.263138 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vs7dx"] Jan 27 19:01:04 crc kubenswrapper[4915]: W0127 19:01:04.263369 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc00196d_5999_4fb5_ad85_c2ed51b570ae.slice/crio-95b5ae3c4e845048c16aaba4c0cc8813c2b9538cefec8715e5045ef1a3410d9d WatchSource:0}: Error finding container 95b5ae3c4e845048c16aaba4c0cc8813c2b9538cefec8715e5045ef1a3410d9d: Status 404 returned error can't find the container with id 95b5ae3c4e845048c16aaba4c0cc8813c2b9538cefec8715e5045ef1a3410d9d Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.271576 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4ppcd"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.290261 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.401028 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.402252 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" event={"ID":"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0","Type":"ContainerStarted","Data":"9340356295e3449674fb9081893031fe2c93e2e30ceddb9aac0592dd361c9aad"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.405051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4ppcd" event={"ID":"49c48f92-8129-4065-910c-166770ecb401","Type":"ContainerStarted","Data":"fab93bb9607872e331d2697811a973608ec8a38c41ecab9abf099ea4a8fc32a6"} Jan 27 19:01:04 crc kubenswrapper[4915]: 
I0127 19:01:04.406439 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tbtjt" event={"ID":"2a336723-5840-48ef-b010-ca1cff69f962","Type":"ContainerStarted","Data":"cc77d0bc5caa5af115bc53df859abff0de9512d53e7e16773685bf53d7e63749"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.407967 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerStarted","Data":"389fd930f719907b1662b46979e0bbf53bf47a263cb91e72a04c9a01562b221c"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.409281 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs7dx" event={"ID":"8c061dc9-adce-4ec1-9d89-d55751e9f851","Type":"ContainerStarted","Data":"114e5ac4db455ba7fb0c282a4106b7d67ea2029d3ede13ac0464c1ecaae0048a"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.411253 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerStarted","Data":"95b5ae3c4e845048c16aaba4c0cc8813c2b9538cefec8715e5045ef1a3410d9d"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.413048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" event={"ID":"ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007","Type":"ContainerDied","Data":"87f8a50569e849d6edad4916454caee376147392175c78bf73d1f323ba704086"} Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.413078 4915 scope.go:117] "RemoveContainer" containerID="a381f93852c99e35ede6c45c19dd017ebf80c7e66e3589dbfd9ed2e9707722e4" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.413231 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-d88pg" Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.511072 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"] Jan 27 19:01:04 crc kubenswrapper[4915]: I0127 19:01:04.519744 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-d88pg"] Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.371759 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" path="/var/lib/kubelet/pods/ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007/volumes" Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.423936 4915 generic.go:334] "Generic (PLEG): container finished" podID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerID="97502c59181578a12e44b071396f1b40560fdfe0d1e791031e7c28373b5b2136" exitCode=0 Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.424002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" event={"ID":"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0","Type":"ContainerDied","Data":"97502c59181578a12e44b071396f1b40560fdfe0d1e791031e7c28373b5b2136"} Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.472243 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerStarted","Data":"6203aaf9b445da3072b2c3f1e0f01220637fb767405abfbf7e1dc80ef816ffa6"} Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.477380 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs7dx" event={"ID":"8c061dc9-adce-4ec1-9d89-d55751e9f851","Type":"ContainerStarted","Data":"01e3ce09a2526a0318f8a67730a60d1b0cc8744b130e54f91d75d1a77fb8f6a7"} Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.484516 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerStarted","Data":"ee3f81ad7d09bfb093284cba0385dadffba13392f9edb7eca07a4437ed8a54d1"} Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.484591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerStarted","Data":"7ed4bae6fddef3c7128cf5402e688ccc25c5676f538b476fec3948bf37c34aa8"} Jan 27 19:01:05 crc kubenswrapper[4915]: I0127 19:01:05.497112 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vs7dx" podStartSLOduration=5.497096673 podStartE2EDuration="5.497096673s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:05.496306244 +0000 UTC m=+1156.854159918" watchObservedRunningTime="2026-01-27 19:01:05.497096673 +0000 UTC m=+1156.854950337" Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.511648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerStarted","Data":"afa2fa21b086994d71eeb4c0cebf45bb0d8f0d8a6f8780919ab746c20b689796"} Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.511974 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-log" containerID="cri-o://6203aaf9b445da3072b2c3f1e0f01220637fb767405abfbf7e1dc80ef816ffa6" gracePeriod=30 Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.512268 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" 
containerName="glance-httpd" containerID="cri-o://afa2fa21b086994d71eeb4c0cebf45bb0d8f0d8a6f8780919ab746c20b689796" gracePeriod=30 Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.519474 4915 generic.go:334] "Generic (PLEG): container finished" podID="1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" containerID="753760b10b6c7ebc4f0b10f1a269dd6898612cf07c1c2cfc1191d8b5e78b2a3f" exitCode=0 Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.519607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rfd8l" event={"ID":"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26","Type":"ContainerDied","Data":"753760b10b6c7ebc4f0b10f1a269dd6898612cf07c1c2cfc1191d8b5e78b2a3f"} Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.523946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerStarted","Data":"423b9afd158628ef095cf2ef147219b8368a082e0ec8c44d6a1f858e3e20e173"} Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.524096 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-log" containerID="cri-o://ee3f81ad7d09bfb093284cba0385dadffba13392f9edb7eca07a4437ed8a54d1" gracePeriod=30 Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.524167 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-httpd" containerID="cri-o://423b9afd158628ef095cf2ef147219b8368a082e0ec8c44d6a1f858e3e20e173" gracePeriod=30 Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.541116 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.541093813 podStartE2EDuration="6.541093813s" 
podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:06.536572242 +0000 UTC m=+1157.894425896" watchObservedRunningTime="2026-01-27 19:01:06.541093813 +0000 UTC m=+1157.898947477" Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.542946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" event={"ID":"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0","Type":"ContainerStarted","Data":"306ff8dc7e6887829ec92ba9a6ee920b7c7804f1110868596815ffd9849f9510"} Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.548865 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.578710 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.578693936 podStartE2EDuration="6.578693936s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:06.575267992 +0000 UTC m=+1157.933121656" watchObservedRunningTime="2026-01-27 19:01:06.578693936 +0000 UTC m=+1157.936547600" Jan 27 19:01:06 crc kubenswrapper[4915]: I0127 19:01:06.599150 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" podStartSLOduration=5.599131737 podStartE2EDuration="5.599131737s" podCreationTimestamp="2026-01-27 19:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:06.594869592 +0000 UTC m=+1157.952723256" watchObservedRunningTime="2026-01-27 19:01:06.599131737 +0000 UTC m=+1157.956985401" Jan 27 19:01:07 crc kubenswrapper[4915]: 
I0127 19:01:07.553148 4915 generic.go:334] "Generic (PLEG): container finished" podID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerID="afa2fa21b086994d71eeb4c0cebf45bb0d8f0d8a6f8780919ab746c20b689796" exitCode=0 Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.553524 4915 generic.go:334] "Generic (PLEG): container finished" podID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerID="6203aaf9b445da3072b2c3f1e0f01220637fb767405abfbf7e1dc80ef816ffa6" exitCode=143 Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.553201 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerDied","Data":"afa2fa21b086994d71eeb4c0cebf45bb0d8f0d8a6f8780919ab746c20b689796"} Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.553607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerDied","Data":"6203aaf9b445da3072b2c3f1e0f01220637fb767405abfbf7e1dc80ef816ffa6"} Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.556302 4915 generic.go:334] "Generic (PLEG): container finished" podID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerID="423b9afd158628ef095cf2ef147219b8368a082e0ec8c44d6a1f858e3e20e173" exitCode=0 Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.556332 4915 generic.go:334] "Generic (PLEG): container finished" podID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerID="ee3f81ad7d09bfb093284cba0385dadffba13392f9edb7eca07a4437ed8a54d1" exitCode=143 Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.557364 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerDied","Data":"423b9afd158628ef095cf2ef147219b8368a082e0ec8c44d6a1f858e3e20e173"} Jan 27 19:01:07 crc kubenswrapper[4915]: I0127 19:01:07.557416 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerDied","Data":"ee3f81ad7d09bfb093284cba0385dadffba13392f9edb7eca07a4437ed8a54d1"} Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.111221 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.174371 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.174707 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rdr\" (UniqueName: \"kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.174855 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.174902 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.174939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.175095 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys\") pod \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\" (UID: \"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26\") " Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.180980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr" (OuterVolumeSpecName: "kube-api-access-l8rdr") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "kube-api-access-l8rdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.183908 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.186494 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts" (OuterVolumeSpecName: "scripts") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.187166 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.202482 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data" (OuterVolumeSpecName: "config-data") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.204359 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" (UID: "1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279259 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279302 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279392 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279407 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279416 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rdr\" (UniqueName: \"kubernetes.io/projected/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-kube-api-access-l8rdr\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.279424 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.581652 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rfd8l" event={"ID":"1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26","Type":"ContainerDied","Data":"995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941"} Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 
19:01:10.581698 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995266b63fd8f791a6eb02b41b18d22313654914f14bd7c12009090226f92941" Jan 27 19:01:10 crc kubenswrapper[4915]: I0127 19:01:10.581764 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rfd8l" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.198766 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rfd8l"] Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.206932 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rfd8l"] Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.284950 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-snpdp"] Jan 27 19:01:11 crc kubenswrapper[4915]: E0127 19:01:11.285455 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" containerName="init" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.285476 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" containerName="init" Jan 27 19:01:11 crc kubenswrapper[4915]: E0127 19:01:11.285530 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" containerName="keystone-bootstrap" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.285542 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" containerName="keystone-bootstrap" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.285767 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" containerName="keystone-bootstrap" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.285821 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0cdc0f-0017-434d-a2ef-eb1b8c8ec007" 
containerName="init" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.286898 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.288415 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.288961 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.289022 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fc7sv" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.290171 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.290219 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.294773 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-snpdp"] Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.367975 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26" path="/var/lib/kubelet/pods/1b2ce2a3-5bf2-4522-b081-b4bfd6bdcb26/volumes" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398565 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398634 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-st78t\" (UniqueName: \"kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398669 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398690 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.398821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.500909 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.501111 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.501196 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st78t\" (UniqueName: \"kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.501263 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.501283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.501392 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts\") pod 
\"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.506400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.506486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.507047 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.509013 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.510188 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: 
I0127 19:01:11.517999 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st78t\" (UniqueName: \"kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t\") pod \"keystone-bootstrap-snpdp\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") " pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.607847 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snpdp" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.767992 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.859439 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.859654 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns" containerID="cri-o://b7dd8773947a7d57e333284840daf7a29c6af036c5f4dc30bfd43ad807d04fa7" gracePeriod=10 Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.985322 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:11 crc kubenswrapper[4915]: I0127 19:01:11.994159 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.112619 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.112667 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.112710 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.112733 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs" (OuterVolumeSpecName: "logs") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113249 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113323 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113350 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113377 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113410 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113437 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs\") pod 
\"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113464 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdw8k\" (UniqueName: \"kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113537 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5k5k\" (UniqueName: \"kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113598 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113625 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts\") pod \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\" (UID: \"a76b02c3-0641-40d4-81ab-c4cdad9f5717\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113642 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs\") pod \"86087cb3-4c77-44e5-b974-8e988cee6c9c\" (UID: \"86087cb3-4c77-44e5-b974-8e988cee6c9c\") " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113932 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.113988 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.115422 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.118332 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.118533 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs" (OuterVolumeSpecName: "logs") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.118809 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k" (OuterVolumeSpecName: "kube-api-access-q5k5k") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "kube-api-access-q5k5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.119556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k" (OuterVolumeSpecName: "kube-api-access-fdw8k") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "kube-api-access-fdw8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.119612 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts" (OuterVolumeSpecName: "scripts") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.120751 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts" (OuterVolumeSpecName: "scripts") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.122453 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.142135 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.144970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.165160 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data" (OuterVolumeSpecName: "config-data") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.169993 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "86087cb3-4c77-44e5-b974-8e988cee6c9c" (UID: "86087cb3-4c77-44e5-b974-8e988cee6c9c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.173105 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data" (OuterVolumeSpecName: "config-data") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.178643 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a76b02c3-0641-40d4-81ab-c4cdad9f5717" (UID: "a76b02c3-0641-40d4-81ab-c4cdad9f5717"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215700 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215750 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215811 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215835 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215849 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215861 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a76b02c3-0641-40d4-81ab-c4cdad9f5717-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215872 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215885 4915 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215898 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215911 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdw8k\" (UniqueName: \"kubernetes.io/projected/a76b02c3-0641-40d4-81ab-c4cdad9f5717-kube-api-access-fdw8k\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215924 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5k5k\" (UniqueName: \"kubernetes.io/projected/86087cb3-4c77-44e5-b974-8e988cee6c9c-kube-api-access-q5k5k\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215938 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215975 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76b02c3-0641-40d4-81ab-c4cdad9f5717-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215988 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86087cb3-4c77-44e5-b974-8e988cee6c9c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.215998 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86087cb3-4c77-44e5-b974-8e988cee6c9c-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.236460 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.238400 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.318059 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.318094 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.598745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a76b02c3-0641-40d4-81ab-c4cdad9f5717","Type":"ContainerDied","Data":"389fd930f719907b1662b46979e0bbf53bf47a263cb91e72a04c9a01562b221c"} Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.598805 4915 scope.go:117] "RemoveContainer" containerID="afa2fa21b086994d71eeb4c0cebf45bb0d8f0d8a6f8780919ab746c20b689796" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.598822 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.601497 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.601679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"86087cb3-4c77-44e5-b974-8e988cee6c9c","Type":"ContainerDied","Data":"7ed4bae6fddef3c7128cf5402e688ccc25c5676f538b476fec3948bf37c34aa8"} Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.605514 4915 generic.go:334] "Generic (PLEG): container finished" podID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerID="b7dd8773947a7d57e333284840daf7a29c6af036c5f4dc30bfd43ad807d04fa7" exitCode=0 Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.605742 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" event={"ID":"c29f99b1-e205-4491-8c84-7d6247e3752c","Type":"ContainerDied","Data":"b7dd8773947a7d57e333284840daf7a29c6af036c5f4dc30bfd43ad807d04fa7"} Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.676958 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.689307 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.712163 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.722174 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734143 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: E0127 19:01:12.734540 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-log" Jan 27 19:01:12 crc 
kubenswrapper[4915]: I0127 19:01:12.734567 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-log" Jan 27 19:01:12 crc kubenswrapper[4915]: E0127 19:01:12.734588 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734599 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: E0127 19:01:12.734625 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734634 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: E0127 19:01:12.734665 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-log" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734674 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-log" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734940 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734966 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-log" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734983 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" containerName="glance-log" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.734997 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" containerName="glance-httpd" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.736099 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.737905 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.738211 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.738629 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.739313 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cc55j" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.752457 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.764165 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.765605 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.769335 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.769666 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.773610 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.827845 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.827894 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.827911 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.827966 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828183 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828260 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkct\" (UniqueName: \"kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828332 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7psjl\" (UniqueName: \"kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828348 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828366 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828440 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828486 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828563 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828658 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.828715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930276 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930336 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930363 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930387 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkct\" (UniqueName: \"kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930421 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psjl\" (UniqueName: \"kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930442 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930489 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930643 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930669 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930690 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.930750 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.931342 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.931390 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.931599 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.931724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.931836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.932147 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.934882 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 
19:01:12.935962 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.938666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.939333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.939558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.946427 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.947095 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7psjl\" (UniqueName: \"kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.949538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.954033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data\") pod \"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.959017 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkct\" (UniqueName: \"kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.972833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:01:12 crc kubenswrapper[4915]: I0127 19:01:12.973081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") " pod="openstack/glance-default-external-api-0" Jan 27 19:01:13 crc kubenswrapper[4915]: I0127 19:01:13.057933 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:01:13 crc kubenswrapper[4915]: I0127 19:01:13.101399 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:13 crc kubenswrapper[4915]: I0127 19:01:13.372657 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86087cb3-4c77-44e5-b974-8e988cee6c9c" path="/var/lib/kubelet/pods/86087cb3-4c77-44e5-b974-8e988cee6c9c/volumes" Jan 27 19:01:13 crc kubenswrapper[4915]: I0127 19:01:13.373495 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76b02c3-0641-40d4-81ab-c4cdad9f5717" path="/var/lib/kubelet/pods/a76b02c3-0641-40d4-81ab-c4cdad9f5717/volumes" Jan 27 19:01:14 crc kubenswrapper[4915]: I0127 19:01:14.029181 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Jan 27 19:01:19 crc kubenswrapper[4915]: I0127 19:01:19.028573 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Jan 27 19:01:20 crc kubenswrapper[4915]: I0127 19:01:20.624601 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 19:01:20 crc kubenswrapper[4915]: I0127 19:01:20.624995 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:01:21 crc kubenswrapper[4915]: E0127 19:01:21.888012 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 19:01:21 crc kubenswrapper[4915]: E0127 19:01:21.888455 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagatio
n:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvbtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7drwm_openstack(99e935b3-c64c-4c02-821b-18301c6b6c27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:01:21 crc kubenswrapper[4915]: E0127 19:01:21.890173 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7drwm" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" Jan 27 19:01:22 crc kubenswrapper[4915]: E0127 19:01:22.265223 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 27 19:01:22 crc kubenswrapper[4915]: E0127 19:01:22.265436 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf9hf5h58bhc7h645h76hc5h545h5cfh6h57dh586h67bh574h8fh599h67ch5f6h678h8h8h7fh8bh6bh75h658h86h68dh58dh5dch58chfbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjr4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fc00196d-5999-4fb5-ad85-c2ed51b570ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.289526 4915 scope.go:117] "RemoveContainer" containerID="6203aaf9b445da3072b2c3f1e0f01220637fb767405abfbf7e1dc80ef816ffa6" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.458893 4915 scope.go:117] "RemoveContainer" containerID="423b9afd158628ef095cf2ef147219b8368a082e0ec8c44d6a1f858e3e20e173" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.619299 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.639782 4915 scope.go:117] "RemoveContainer" containerID="ee3f81ad7d09bfb093284cba0385dadffba13392f9edb7eca07a4437ed8a54d1" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.694932 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.695382 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-45rqx" event={"ID":"c29f99b1-e205-4491-8c84-7d6247e3752c","Type":"ContainerDied","Data":"4dd87f49d1489b815ca04ecc09b6a77b3e7a78968e4a303e8f38694210fa48fa"} Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.695436 4915 scope.go:117] "RemoveContainer" containerID="b7dd8773947a7d57e333284840daf7a29c6af036c5f4dc30bfd43ad807d04fa7" Jan 27 19:01:22 crc kubenswrapper[4915]: E0127 19:01:22.697474 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7drwm" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.708064 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-snpdp"] Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.713349 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.713988 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.714156 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.714327 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.714600 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.714659 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4ltz\" (UniqueName: \"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz\") pod \"c29f99b1-e205-4491-8c84-7d6247e3752c\" (UID: \"c29f99b1-e205-4491-8c84-7d6247e3752c\") " Jan 27 19:01:22 crc kubenswrapper[4915]: W0127 19:01:22.723002 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c0c7d8_6044_4702_ac31_e90652d15248.slice/crio-dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a WatchSource:0}: Error finding container dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a: Status 404 returned error can't find the container with id dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.731744 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz" (OuterVolumeSpecName: "kube-api-access-p4ltz") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "kube-api-access-p4ltz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.770597 4915 scope.go:117] "RemoveContainer" containerID="c208aa897382a36123d50e1ea00cb967b93198d8c02de96a5f4eb928316c3b11" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.775847 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.779230 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config" (OuterVolumeSpecName: "config") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.780221 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.795448 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.801607 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c29f99b1-e205-4491-8c84-7d6247e3752c" (UID: "c29f99b1-e205-4491-8c84-7d6247e3752c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817343 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817372 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817384 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4ltz\" (UniqueName: \"kubernetes.io/projected/c29f99b1-e205-4491-8c84-7d6247e3752c-kube-api-access-p4ltz\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817672 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817708 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.817717 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29f99b1-e205-4491-8c84-7d6247e3752c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:22 crc kubenswrapper[4915]: W0127 19:01:22.935785 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab212b87_cac4_4075_9899_28c8ca8bae4b.slice/crio-3449e9500fbf287fb5075c1f5b1da2dd76e56098add32cdf199e332ffe41412c WatchSource:0}: Error finding container 3449e9500fbf287fb5075c1f5b1da2dd76e56098add32cdf199e332ffe41412c: Status 404 returned error can't find the container with id 3449e9500fbf287fb5075c1f5b1da2dd76e56098add32cdf199e332ffe41412c Jan 27 19:01:22 crc kubenswrapper[4915]: I0127 19:01:22.960634 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.024871 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.035090 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.041405 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-45rqx"] Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.367713 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" 
path="/var/lib/kubelet/pods/c29f99b1-e205-4491-8c84-7d6247e3752c/volumes" Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.724025 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tbtjt" event={"ID":"2a336723-5840-48ef-b010-ca1cff69f962","Type":"ContainerStarted","Data":"e2a265c1ce7bdb5d70e035fb5c422c6c12f3d29c42f1e121829c270efeb41f4f"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.729746 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snpdp" event={"ID":"a6c0c7d8-6044-4702-ac31-e90652d15248","Type":"ContainerStarted","Data":"4b0112d5705b109ff815f61cf7bdeeaff9197c76f99b6d90650c55afe47637e3"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.729820 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snpdp" event={"ID":"a6c0c7d8-6044-4702-ac31-e90652d15248","Type":"ContainerStarted","Data":"dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.731994 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerStarted","Data":"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.732031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerStarted","Data":"3449e9500fbf287fb5075c1f5b1da2dd76e56098add32cdf199e332ffe41412c"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.750122 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tbtjt" podStartSLOduration=4.783559134 podStartE2EDuration="22.750100854s" podCreationTimestamp="2026-01-27 19:01:01 +0000 UTC" firstStartedPulling="2026-01-27 19:01:04.283890933 +0000 UTC 
m=+1155.641744597" lastFinishedPulling="2026-01-27 19:01:22.250432653 +0000 UTC m=+1173.608286317" observedRunningTime="2026-01-27 19:01:23.740735854 +0000 UTC m=+1175.098589518" watchObservedRunningTime="2026-01-27 19:01:23.750100854 +0000 UTC m=+1175.107954518" Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.761684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4ppcd" event={"ID":"49c48f92-8129-4065-910c-166770ecb401","Type":"ContainerStarted","Data":"77c76021d33ae4822a885e5c353f68fba8512e26f1aaf7f82c6dc167a05c68d6"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.764753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerStarted","Data":"21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.764814 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerStarted","Data":"b876248c3439affcab72a047008d79858fb24c5de0fc7a7a6df3fafe866fd097"} Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.790724 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4ppcd" podStartSLOduration=4.803019309 podStartE2EDuration="22.790697036s" podCreationTimestamp="2026-01-27 19:01:01 +0000 UTC" firstStartedPulling="2026-01-27 19:01:04.289613763 +0000 UTC m=+1155.647467427" lastFinishedPulling="2026-01-27 19:01:22.27729149 +0000 UTC m=+1173.635145154" observedRunningTime="2026-01-27 19:01:23.778070808 +0000 UTC m=+1175.135924472" watchObservedRunningTime="2026-01-27 19:01:23.790697036 +0000 UTC m=+1175.148550700" Jan 27 19:01:23 crc kubenswrapper[4915]: I0127 19:01:23.791424 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-snpdp" podStartSLOduration=12.791415324 podStartE2EDuration="12.791415324s" podCreationTimestamp="2026-01-27 19:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:23.759989765 +0000 UTC m=+1175.117843429" watchObservedRunningTime="2026-01-27 19:01:23.791415324 +0000 UTC m=+1175.149269008" Jan 27 19:01:24 crc kubenswrapper[4915]: I0127 19:01:24.781917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerStarted","Data":"5a76fa29bc2eda93d30186a1db18d9c67ba10a4fa44315cfee2b9ff812297c96"} Jan 27 19:01:24 crc kubenswrapper[4915]: I0127 19:01:24.788974 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerStarted","Data":"3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75"} Jan 27 19:01:24 crc kubenswrapper[4915]: I0127 19:01:24.794970 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerStarted","Data":"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9"} Jan 27 19:01:24 crc kubenswrapper[4915]: I0127 19:01:24.811953 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.811934516 podStartE2EDuration="12.811934516s" podCreationTimestamp="2026-01-27 19:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:24.811445784 +0000 UTC m=+1176.169299448" watchObservedRunningTime="2026-01-27 19:01:24.811934516 +0000 UTC m=+1176.169788190" Jan 27 19:01:24 crc kubenswrapper[4915]: I0127 19:01:24.834235 
4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.834212981 podStartE2EDuration="12.834212981s" podCreationTimestamp="2026-01-27 19:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:24.831267199 +0000 UTC m=+1176.189120883" watchObservedRunningTime="2026-01-27 19:01:24.834212981 +0000 UTC m=+1176.192066655" Jan 27 19:01:25 crc kubenswrapper[4915]: I0127 19:01:25.806259 4915 generic.go:334] "Generic (PLEG): container finished" podID="49c48f92-8129-4065-910c-166770ecb401" containerID="77c76021d33ae4822a885e5c353f68fba8512e26f1aaf7f82c6dc167a05c68d6" exitCode=0 Jan 27 19:01:25 crc kubenswrapper[4915]: I0127 19:01:25.807614 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4ppcd" event={"ID":"49c48f92-8129-4065-910c-166770ecb401","Type":"ContainerDied","Data":"77c76021d33ae4822a885e5c353f68fba8512e26f1aaf7f82c6dc167a05c68d6"} Jan 27 19:01:26 crc kubenswrapper[4915]: E0127 19:01:26.120125 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c0c7d8_6044_4702_ac31_e90652d15248.slice/crio-4b0112d5705b109ff815f61cf7bdeeaff9197c76f99b6d90650c55afe47637e3.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.815340 4915 generic.go:334] "Generic (PLEG): container finished" podID="8c061dc9-adce-4ec1-9d89-d55751e9f851" containerID="01e3ce09a2526a0318f8a67730a60d1b0cc8744b130e54f91d75d1a77fb8f6a7" exitCode=0 Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.815411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs7dx" 
event={"ID":"8c061dc9-adce-4ec1-9d89-d55751e9f851","Type":"ContainerDied","Data":"01e3ce09a2526a0318f8a67730a60d1b0cc8744b130e54f91d75d1a77fb8f6a7"} Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.817030 4915 generic.go:334] "Generic (PLEG): container finished" podID="2a336723-5840-48ef-b010-ca1cff69f962" containerID="e2a265c1ce7bdb5d70e035fb5c422c6c12f3d29c42f1e121829c270efeb41f4f" exitCode=0 Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.817083 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tbtjt" event={"ID":"2a336723-5840-48ef-b010-ca1cff69f962","Type":"ContainerDied","Data":"e2a265c1ce7bdb5d70e035fb5c422c6c12f3d29c42f1e121829c270efeb41f4f"} Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.818602 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6c0c7d8-6044-4702-ac31-e90652d15248" containerID="4b0112d5705b109ff815f61cf7bdeeaff9197c76f99b6d90650c55afe47637e3" exitCode=0 Jan 27 19:01:26 crc kubenswrapper[4915]: I0127 19:01:26.818680 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snpdp" event={"ID":"a6c0c7d8-6044-4702-ac31-e90652d15248","Type":"ContainerDied","Data":"4b0112d5705b109ff815f61cf7bdeeaff9197c76f99b6d90650c55afe47637e3"} Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.402202 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4ppcd" Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.535355 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs\") pod \"49c48f92-8129-4065-910c-166770ecb401\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.535779 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts\") pod \"49c48f92-8129-4065-910c-166770ecb401\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.535983 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data\") pod \"49c48f92-8129-4065-910c-166770ecb401\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.536035 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48sw\" (UniqueName: \"kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw\") pod \"49c48f92-8129-4065-910c-166770ecb401\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.536060 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle\") pod \"49c48f92-8129-4065-910c-166770ecb401\" (UID: \"49c48f92-8129-4065-910c-166770ecb401\") " Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.536771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs" (OuterVolumeSpecName: "logs") pod "49c48f92-8129-4065-910c-166770ecb401" (UID: "49c48f92-8129-4065-910c-166770ecb401"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.541920 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw" (OuterVolumeSpecName: "kube-api-access-r48sw") pod "49c48f92-8129-4065-910c-166770ecb401" (UID: "49c48f92-8129-4065-910c-166770ecb401"). InnerVolumeSpecName "kube-api-access-r48sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.558972 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49c48f92-8129-4065-910c-166770ecb401" (UID: "49c48f92-8129-4065-910c-166770ecb401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.559271 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts" (OuterVolumeSpecName: "scripts") pod "49c48f92-8129-4065-910c-166770ecb401" (UID: "49c48f92-8129-4065-910c-166770ecb401"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.562678 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data" (OuterVolumeSpecName: "config-data") pod "49c48f92-8129-4065-910c-166770ecb401" (UID: "49c48f92-8129-4065-910c-166770ecb401"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.638179 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c48f92-8129-4065-910c-166770ecb401-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.638212 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.638222 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.638235 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r48sw\" (UniqueName: \"kubernetes.io/projected/49c48f92-8129-4065-910c-166770ecb401-kube-api-access-r48sw\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.638248 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c48f92-8129-4065-910c-166770ecb401-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.829560 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4ppcd"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.829574 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4ppcd" event={"ID":"49c48f92-8129-4065-910c-166770ecb401","Type":"ContainerDied","Data":"fab93bb9607872e331d2697811a973608ec8a38c41ecab9abf099ea4a8fc32a6"}
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.829612 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab93bb9607872e331d2697811a973608ec8a38c41ecab9abf099ea4a8fc32a6"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.978910 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"]
Jan 27 19:01:27 crc kubenswrapper[4915]: E0127 19:01:27.979409 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.979435 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns"
Jan 27 19:01:27 crc kubenswrapper[4915]: E0127 19:01:27.979469 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c48f92-8129-4065-910c-166770ecb401" containerName="placement-db-sync"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.979479 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c48f92-8129-4065-910c-166770ecb401" containerName="placement-db-sync"
Jan 27 19:01:27 crc kubenswrapper[4915]: E0127 19:01:27.979493 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="init"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.979500 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="init"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.979699 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29f99b1-e205-4491-8c84-7d6247e3752c" containerName="dnsmasq-dns"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.979725 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c48f92-8129-4065-910c-166770ecb401" containerName="placement-db-sync"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.980906 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.985374 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.985627 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.985764 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.986026 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hxwmd"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.986648 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 19:01:27 crc kubenswrapper[4915]: I0127 19:01:27.989465 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"]
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046722 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchd4\" (UniqueName: \"kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046810 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046860 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.046881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149714 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149777 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149822 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchd4\" (UniqueName: \"kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149862 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149901 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149920 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.149953 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.153882 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.156028 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.158296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.161726 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.164571 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.164853 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.177145 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchd4\" (UniqueName: \"kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4\") pod \"placement-cff5fcc84-lxsfm\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") " pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:28 crc kubenswrapper[4915]: I0127 19:01:28.308020 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.385269 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tbtjt"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.393019 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snpdp"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.396519 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs7dx"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.476649 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config\") pod \"8c061dc9-adce-4ec1-9d89-d55751e9f851\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.476798 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2crb\" (UniqueName: \"kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb\") pod \"2a336723-5840-48ef-b010-ca1cff69f962\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.476954 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxhw\" (UniqueName: \"kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw\") pod \"8c061dc9-adce-4ec1-9d89-d55751e9f851\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.476982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st78t\" (UniqueName: \"kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477006 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle\") pod \"2a336723-5840-48ef-b010-ca1cff69f962\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477048 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477066 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle\") pod \"8c061dc9-adce-4ec1-9d89-d55751e9f851\" (UID: \"8c061dc9-adce-4ec1-9d89-d55751e9f851\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477093 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477130 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477144 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477162 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data\") pod \"2a336723-5840-48ef-b010-ca1cff69f962\" (UID: \"2a336723-5840-48ef-b010-ca1cff69f962\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.477199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts\") pod \"a6c0c7d8-6044-4702-ac31-e90652d15248\" (UID: \"a6c0c7d8-6044-4702-ac31-e90652d15248\") "
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.482130 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw" (OuterVolumeSpecName: "kube-api-access-rhxhw") pod "8c061dc9-adce-4ec1-9d89-d55751e9f851" (UID: "8c061dc9-adce-4ec1-9d89-d55751e9f851"). InnerVolumeSpecName "kube-api-access-rhxhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.482434 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.485954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a336723-5840-48ef-b010-ca1cff69f962" (UID: "2a336723-5840-48ef-b010-ca1cff69f962"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.488158 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb" (OuterVolumeSpecName: "kube-api-access-l2crb") pod "2a336723-5840-48ef-b010-ca1cff69f962" (UID: "2a336723-5840-48ef-b010-ca1cff69f962"). InnerVolumeSpecName "kube-api-access-l2crb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.488372 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.491346 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts" (OuterVolumeSpecName: "scripts") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.505976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t" (OuterVolumeSpecName: "kube-api-access-st78t") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "kube-api-access-st78t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.523815 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.524955 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data" (OuterVolumeSpecName: "config-data") pod "a6c0c7d8-6044-4702-ac31-e90652d15248" (UID: "a6c0c7d8-6044-4702-ac31-e90652d15248"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.532974 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config" (OuterVolumeSpecName: "config") pod "8c061dc9-adce-4ec1-9d89-d55751e9f851" (UID: "8c061dc9-adce-4ec1-9d89-d55751e9f851"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.540774 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a336723-5840-48ef-b010-ca1cff69f962" (UID: "2a336723-5840-48ef-b010-ca1cff69f962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.541688 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c061dc9-adce-4ec1-9d89-d55751e9f851" (UID: "8c061dc9-adce-4ec1-9d89-d55751e9f851"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579281 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhxhw\" (UniqueName: \"kubernetes.io/projected/8c061dc9-adce-4ec1-9d89-d55751e9f851-kube-api-access-rhxhw\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579315 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st78t\" (UniqueName: \"kubernetes.io/projected/a6c0c7d8-6044-4702-ac31-e90652d15248-kube-api-access-st78t\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579325 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579333 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579345 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579354 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579362 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579371 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579379 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a336723-5840-48ef-b010-ca1cff69f962-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579387 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c0c7d8-6044-4702-ac31-e90652d15248-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579395 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c061dc9-adce-4ec1-9d89-d55751e9f851-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.579404 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2crb\" (UniqueName: \"kubernetes.io/projected/2a336723-5840-48ef-b010-ca1cff69f962-kube-api-access-l2crb\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.683347 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"]
Jan 27 19:01:29 crc kubenswrapper[4915]: W0127 19:01:29.691456 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod102e986e_f101_4f49_af96_50368468f7b4.slice/crio-55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e WatchSource:0}: Error finding container 55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e: Status 404 returned error can't find the container with id 55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.849344 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs7dx"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.849555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs7dx" event={"ID":"8c061dc9-adce-4ec1-9d89-d55751e9f851","Type":"ContainerDied","Data":"114e5ac4db455ba7fb0c282a4106b7d67ea2029d3ede13ac0464c1ecaae0048a"}
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.850029 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114e5ac4db455ba7fb0c282a4106b7d67ea2029d3ede13ac0464c1ecaae0048a"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.858602 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerStarted","Data":"55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e"}
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.860619 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerStarted","Data":"e0a916e110308d756b2c4be697a99ac33c79fa849fb72cada723642f15d817d7"}
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.862456 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tbtjt"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.862495 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tbtjt" event={"ID":"2a336723-5840-48ef-b010-ca1cff69f962","Type":"ContainerDied","Data":"cc77d0bc5caa5af115bc53df859abff0de9512d53e7e16773685bf53d7e63749"}
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.862540 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc77d0bc5caa5af115bc53df859abff0de9512d53e7e16773685bf53d7e63749"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.863918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-snpdp" event={"ID":"a6c0c7d8-6044-4702-ac31-e90652d15248","Type":"ContainerDied","Data":"dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a"}
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.863941 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9258ed1ca1d40fc42bdeaa49a09e78986d7edb6b3273dda5cc11409b40da7a"
Jan 27 19:01:29 crc kubenswrapper[4915]: I0127 19:01:29.864034 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-snpdp"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.607387 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"]
Jan 27 19:01:30 crc kubenswrapper[4915]: E0127 19:01:30.608982 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c0c7d8-6044-4702-ac31-e90652d15248" containerName="keystone-bootstrap"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609071 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c0c7d8-6044-4702-ac31-e90652d15248" containerName="keystone-bootstrap"
Jan 27 19:01:30 crc kubenswrapper[4915]: E0127 19:01:30.609166 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a336723-5840-48ef-b010-ca1cff69f962" containerName="barbican-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609235 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a336723-5840-48ef-b010-ca1cff69f962" containerName="barbican-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: E0127 19:01:30.609335 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c061dc9-adce-4ec1-9d89-d55751e9f851" containerName="neutron-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609406 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c061dc9-adce-4ec1-9d89-d55751e9f851" containerName="neutron-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609671 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c061dc9-adce-4ec1-9d89-d55751e9f851" containerName="neutron-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609757 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c0c7d8-6044-4702-ac31-e90652d15248" containerName="keystone-bootstrap"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.609941 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a336723-5840-48ef-b010-ca1cff69f962" containerName="barbican-db-sync"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.610662 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.614272 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fc7sv"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.614435 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.614536 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.615137 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.615270 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.615333 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.636631 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.701877 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.702939 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703084 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703192 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703478 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703594 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703706 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrccc\" (UniqueName: \"kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.703824 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.708020 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.744517 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.775526 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.777304 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.782854 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.784564 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.786860 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.787077 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ctllf"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.787288 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.788020 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.799107 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.808646 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rswfm\" (UniqueName: \"kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.808704 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.808782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName:
\"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.808998 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809050 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrccc\" (UniqueName: \"kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809066 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809151 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809167 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809200 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809239 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809274 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.809317 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.820854 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.821922 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.846836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.848800 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs\") pod \"keystone-5bf7f58cfb-6c779\" (UID: 
\"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.849345 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.849905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.854020 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.875217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrccc\" (UniqueName: \"kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc\") pod \"keystone-5bf7f58cfb-6c779\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912265 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rswfm\" (UniqueName: \"kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz" 
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912331 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912385 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912413 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912479 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912503 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912534 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8qw\" (UniqueName: \"kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912595 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912631 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gwnv\" (UniqueName: \"kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.912722 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.913992 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.914578 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.915136 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.915630 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.915672 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.922358 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.945440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rswfm\" (UniqueName: \"kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm\") pod \"dnsmasq-dns-6b7b667979-slkqz\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.956922 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79f7874b76-rrchs"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.958310 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.975887 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.976082 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.976264 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.976464 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vqr4f"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.978074 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f7874b76-rrchs"]
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.978101 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.978120 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerStarted","Data":"ad1a62b5c13ac79f78f79de53e79d8b6e2d71d37af6a621d62c0f82ef67828b5"}
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.978133 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.978143 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerStarted","Data":"2be69f7a262552a158049c3b0bbfbb5ea2b1a992e0c212434065b95a436e316e"}
Jan 27 19:01:30 crc kubenswrapper[4915]: I0127 19:01:30.982404 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015113 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015160 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015493 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015519 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015538 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015572 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015645 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015703 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8qw\" (UniqueName: \"kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015725 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015745 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015776 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbvf\" (UniqueName: \"kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gwnv\" (UniqueName: \"kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.015834 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.016205 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"]
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.016863 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-slkqz"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.025702 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"]
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.025883 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.026264 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.026532 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.027103 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dfc77bd66-5pjwk"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.030761 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.039482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.049725 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.051052 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.062027 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.070860 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"]
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.094382 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.097404 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8qw\" (UniqueName: \"kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw\") pod \"barbican-worker-5bb5986567-mfzn6\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " pod="openstack/barbican-worker-5bb5986567-mfzn6"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.118684 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.118818 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.118893 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.118955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119054 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119164 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvk5l\" (UniqueName: \"kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119246 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk"
Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119410 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") "
pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119484 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.119558 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wbvf\" (UniqueName: \"kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.128289 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.128296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.145947 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.158327 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.160464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gwnv\" (UniqueName: \"kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv\") pod \"barbican-keystone-listener-7686c5764d-hfh9t\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.161695 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.172553 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wbvf\" (UniqueName: \"kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.172609 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.174051 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.180145 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config\") pod \"neutron-79f7874b76-rrchs\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.216036 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.218609 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.220731 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.220869 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.220967 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.221071 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gvm\" (UniqueName: \"kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.221146 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvk5l\" (UniqueName: \"kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.221217 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.221288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.221381 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 
19:01:31.221461 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.225917 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.226257 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.223224 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.224198 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bb5986567-mfzn6" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.230931 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.234480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.234551 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.235986 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.247169 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvk5l\" (UniqueName: \"kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.251931 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data\") pod \"barbican-api-7dfc77bd66-5pjwk\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.252477 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cff5fcc84-lxsfm" podStartSLOduration=4.25246266 podStartE2EDuration="4.25246266s" podCreationTimestamp="2026-01-27 19:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:31.089557005 +0000 UTC m=+1182.447410669" watchObservedRunningTime="2026-01-27 19:01:31.25246266 +0000 UTC m=+1182.610316314" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.265859 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.302172 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.328892 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.328951 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.328967 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.328982 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4k5r\" (UniqueName: \"kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329013 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q948r\" (UniqueName: \"kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r\") pod 
\"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329052 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329088 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gvm\" (UniqueName: \"kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329103 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329127 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329153 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329177 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329198 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329221 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.329299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.330401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.331054 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.351953 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.364557 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.369153 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.402314 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gvm\" (UniqueName: \"kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm\") pod \"dnsmasq-dns-848cf88cfc-gjvmw\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.406594 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431270 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431334 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431445 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" 
(UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.431678 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.433356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.433397 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.433421 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4k5r\" (UniqueName: \"kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.433462 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q948r\" (UniqueName: \"kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r\") pod 
\"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.436747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.436764 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.444081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.451665 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.453243 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.454358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-q948r\" (UniqueName: \"kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.455083 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.455113 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.455510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.456244 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.456432 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.456464 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.456498 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.456630 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4k5r\" (UniqueName: \"kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.458572 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.459228 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data\") pod \"barbican-worker-f9776577f-2jndx\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") " pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.464110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom\") pod \"barbican-keystone-listener-854f9c8998-68jxd\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.525077 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.534768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.534964 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvvp\" (UniqueName: \"kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.534999 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535062 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535185 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8z2\" (UniqueName: \"kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2\") pod 
\"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535212 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535232 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535252 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535277 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.535318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom\") pod 
\"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.577855 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f9776577f-2jndx" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.616513 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639514 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639667 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc8z2\" (UniqueName: \"kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639695 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle\") pod \"neutron-cb96bd94d-x5bv4\" (UID: 
\"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639735 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639755 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639778 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639820 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639921 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rvvp\" (UniqueName: \"kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 
27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.639939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.642455 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.659487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.663251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.663743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.664282 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.664550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.664874 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.665747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rvvp\" (UniqueName: \"kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.665834 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data\") pod \"barbican-api-65cdc599f4-6dhwr\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.667554 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc8z2\" (UniqueName: 
\"kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2\") pod \"neutron-cb96bd94d-x5bv4\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.846000 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.896351 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"] Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.946194 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:31 crc kubenswrapper[4915]: I0127 19:01:31.989738 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf7f58cfb-6c779" event={"ID":"d0031b79-12aa-4487-8501-6e122053cc13","Type":"ContainerStarted","Data":"c1d97c77399d2cd02cd8e2b010752cad135e8e77a434c2a2fa694a86efaecc7d"} Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.233605 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"] Jan 27 19:01:32 crc kubenswrapper[4915]: W0127 19:01:32.252801 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01 WatchSource:0}: Error finding container 51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01: Status 404 returned error can't find the container with id 51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01 Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.256068 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"] Jan 27 19:01:32 crc kubenswrapper[4915]: W0127 19:01:32.265841 
4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c1db90_61fc_4aa0_8371_eae7ac202752.slice/crio-71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab WatchSource:0}: Error finding container 71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab: Status 404 returned error can't find the container with id 71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.267630 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"] Jan 27 19:01:32 crc kubenswrapper[4915]: W0127 19:01:32.268677 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb088b573_e938_44b5_bb88_977b7b23dfcb.slice/crio-79dfe6d1ba98ad28c5801316bcb24bf303906988d7a9409d8f56c5eb27fc46ac WatchSource:0}: Error finding container 79dfe6d1ba98ad28c5801316bcb24bf303906988d7a9409d8f56c5eb27fc46ac: Status 404 returned error can't find the container with id 79dfe6d1ba98ad28c5801316bcb24bf303906988d7a9409d8f56c5eb27fc46ac Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.635989 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.673421 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.688053 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"] Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.710836 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"] Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.723635 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:01:32 crc kubenswrapper[4915]: W0127 19:01:32.757805 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d299c59_4dbe_4032_aadc_3c35ecde70b2.slice/crio-711b392f3670777e08ac006546bfdd2ee1140c99c9e18d8ea026e627ab9b779d WatchSource:0}: Error finding container 711b392f3670777e08ac006546bfdd2ee1140c99c9e18d8ea026e627ab9b779d: Status 404 returned error can't find the container with id 711b392f3670777e08ac006546bfdd2ee1140c99c9e18d8ea026e627ab9b779d Jan 27 19:01:32 crc kubenswrapper[4915]: W0127 19:01:32.761591 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53bd39be_21df_4cea_aed2_0ac820ce45b6.slice/crio-0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2 WatchSource:0}: Error finding container 0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2: Status 404 returned error can't find the container with id 0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2 Jan 27 19:01:32 crc kubenswrapper[4915]: I0127 19:01:32.762980 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79f7874b76-rrchs"] Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.010891 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerStarted","Data":"d39158ff46672ce4353c66afac0a87241133bfc0e6b9bd69d455fdaacb04f63c"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.013206 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerStarted","Data":"71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 
19:01:33.020712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerStarted","Data":"2bd3496683d10b4d9c7be3493afac9d6f97f82c7c11b9e02477f35b574ff4b57"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.023456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerStarted","Data":"51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.025370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerStarted","Data":"7313c993a7e83aa75d8cfc2c9ae52eea741b1e8cb01ffe7982588c46a2d68441"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.025406 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerStarted","Data":"6ff7769b097d5457f779b7f50753db4669d14d50d23329f7bc3fd26035b95242"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.029097 4915 generic.go:334] "Generic (PLEG): container finished" podID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerID="9dba18cdec925441030673e35b291830b0f2392b2ae376e795f0c6564d26e348" exitCode=0 Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.029201 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" event={"ID":"f6541065-8f84-4ce9-9b90-61ea0f80ee0d","Type":"ContainerDied","Data":"9dba18cdec925441030673e35b291830b0f2392b2ae376e795f0c6564d26e348"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.029268 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" 
event={"ID":"f6541065-8f84-4ce9-9b90-61ea0f80ee0d","Type":"ContainerStarted","Data":"443ef3f650d8fbe32e72497c1c38bba1b7b16635faf02be9c3449fc7ba19b248"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.034068 4915 generic.go:334] "Generic (PLEG): container finished" podID="b088b573-e938-44b5-bb88-977b7b23dfcb" containerID="843067c4da6992c39692e5ee139888237518e0f7dc5776e8a18d6f81b67d3689" exitCode=0 Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.034189 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-slkqz" event={"ID":"b088b573-e938-44b5-bb88-977b7b23dfcb","Type":"ContainerDied","Data":"843067c4da6992c39692e5ee139888237518e0f7dc5776e8a18d6f81b67d3689"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.034238 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-slkqz" event={"ID":"b088b573-e938-44b5-bb88-977b7b23dfcb","Type":"ContainerStarted","Data":"79dfe6d1ba98ad28c5801316bcb24bf303906988d7a9409d8f56c5eb27fc46ac"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.036348 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf7f58cfb-6c779" event={"ID":"d0031b79-12aa-4487-8501-6e122053cc13","Type":"ContainerStarted","Data":"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.036517 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.049832 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerStarted","Data":"2f5ebdc9256b15339951da1e7a6b802d34dbb6a7dde142f5f99b12f9156c78e0"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.049886 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" 
event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerStarted","Data":"0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.058118 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerStarted","Data":"9b2ff4fa11cb679a08a4692b918f9230b644f60da09705a4bf74a03926c47c44"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.058167 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerStarted","Data":"711b392f3670777e08ac006546bfdd2ee1140c99c9e18d8ea026e627ab9b779d"} Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.058337 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.058353 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.071842 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bf7f58cfb-6c779" podStartSLOduration=3.071827461 podStartE2EDuration="3.071827461s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:33.065958568 +0000 UTC m=+1184.423812262" watchObservedRunningTime="2026-01-27 19:01:33.071827461 +0000 UTC m=+1184.429681125" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.102090 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.102392 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.121343 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.194335 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.227230 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.325726 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.329243 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"] Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.373564 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.487587 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.487721 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rswfm\" (UniqueName: \"kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.488702 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.488739 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.488946 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.488982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc\") pod \"b088b573-e938-44b5-bb88-977b7b23dfcb\" (UID: \"b088b573-e938-44b5-bb88-977b7b23dfcb\") " Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.522805 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm" (OuterVolumeSpecName: "kube-api-access-rswfm") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "kube-api-access-rswfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.529713 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.531567 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config" (OuterVolumeSpecName: "config") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.555675 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.561078 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.591428 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b088b573-e938-44b5-bb88-977b7b23dfcb" (UID: "b088b573-e938-44b5-bb88-977b7b23dfcb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.595627 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.595747 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rswfm\" (UniqueName: \"kubernetes.io/projected/b088b573-e938-44b5-bb88-977b7b23dfcb-kube-api-access-rswfm\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.595819 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.595873 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 
19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.595921 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.596189 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b088b573-e938-44b5-bb88-977b7b23dfcb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.796632 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79f7874b76-rrchs"] Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.840350 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:01:33 crc kubenswrapper[4915]: E0127 19:01:33.840806 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b088b573-e938-44b5-bb88-977b7b23dfcb" containerName="init" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.840827 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b088b573-e938-44b5-bb88-977b7b23dfcb" containerName="init" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.841652 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b088b573-e938-44b5-bb88-977b7b23dfcb" containerName="init" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.843240 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.846739 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.849575 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 19:01:33 crc kubenswrapper[4915]: I0127 19:01:33.849776 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019551 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019643 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019711 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.019971 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vg7v\" (UniqueName: \"kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.020015 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.074118 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerStarted","Data":"8278d12cdd301820b7664825101b93243b9e377222557bca73e51d67736d6dcf"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.074356 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.078154 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerStarted","Data":"9dcb7ba1f3ec9fd6b49c31a0cd733639a18e603ad9340d0bb81f8a3609ce0718"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.078885 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.082304 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerStarted","Data":"1f0d61ad56942ea7287b283e39eeb0e031a58874d420cbece9afe0a6b008de9e"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.082961 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.083068 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.085630 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" event={"ID":"f6541065-8f84-4ce9-9b90-61ea0f80ee0d","Type":"ContainerStarted","Data":"bec621d3d287aef537baa9604283134644db9a9872b51b976d0580388579769d"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.086302 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.097182 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-slkqz" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.097187 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-slkqz" event={"ID":"b088b573-e938-44b5-bb88-977b7b23dfcb","Type":"ContainerDied","Data":"79dfe6d1ba98ad28c5801316bcb24bf303906988d7a9409d8f56c5eb27fc46ac"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.097250 4915 scope.go:117] "RemoveContainer" containerID="843067c4da6992c39692e5ee139888237518e0f7dc5776e8a18d6f81b67d3689" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.099492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerStarted","Data":"687c50bfe1b3eeebc2fda8e8986896f3b9229925aab3f64bacbce732edac5ae9"} Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.101091 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.101116 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.101127 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.101225 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.118953 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podStartSLOduration=4.118936013 podStartE2EDuration="4.118936013s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 19:01:34.0885766 +0000 UTC m=+1185.446430264" watchObservedRunningTime="2026-01-27 19:01:34.118936013 +0000 UTC m=+1185.476789677" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vg7v\" (UniqueName: \"kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121545 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121594 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121616 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: 
\"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.121648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.135643 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.137424 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79f7874b76-rrchs" podStartSLOduration=4.137404304 podStartE2EDuration="4.137404304s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:34.105697379 +0000 UTC m=+1185.463551043" watchObservedRunningTime="2026-01-27 19:01:34.137404304 +0000 UTC m=+1185.495257968" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.139464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.143449 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.143550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vg7v\" (UniqueName: \"kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.143981 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.144154 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs\") pod \"neutron-cc59d8b57-zj69c\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.153850 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config\") pod \"neutron-cc59d8b57-zj69c\" (UID: 
\"0b70cef8-be7d-4d25-87e3-c9916452d855\") " pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.169347 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" podStartSLOduration=4.169328735 podStartE2EDuration="4.169328735s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:34.131003868 +0000 UTC m=+1185.488857532" watchObservedRunningTime="2026-01-27 19:01:34.169328735 +0000 UTC m=+1185.527182399" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.191262 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.191453 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65cdc599f4-6dhwr" podStartSLOduration=3.191428896 podStartE2EDuration="3.191428896s" podCreationTimestamp="2026-01-27 19:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:34.15236719 +0000 UTC m=+1185.510220854" watchObservedRunningTime="2026-01-27 19:01:34.191428896 +0000 UTC m=+1185.549282560" Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.240516 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"] Jan 27 19:01:34 crc kubenswrapper[4915]: I0127 19:01:34.240569 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-slkqz"] Jan 27 19:01:35 crc kubenswrapper[4915]: I0127 19:01:35.107785 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79f7874b76-rrchs" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-api" 
containerID="cri-o://2f5ebdc9256b15339951da1e7a6b802d34dbb6a7dde142f5f99b12f9156c78e0" gracePeriod=30 Jan 27 19:01:35 crc kubenswrapper[4915]: I0127 19:01:35.108259 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79f7874b76-rrchs" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-httpd" containerID="cri-o://9dcb7ba1f3ec9fd6b49c31a0cd733639a18e603ad9340d0bb81f8a3609ce0718" gracePeriod=30 Jan 27 19:01:35 crc kubenswrapper[4915]: I0127 19:01:35.108031 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:35 crc kubenswrapper[4915]: I0127 19:01:35.385569 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b088b573-e938-44b5-bb88-977b7b23dfcb" path="/var/lib/kubelet/pods/b088b573-e938-44b5-bb88-977b7b23dfcb/volumes" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115498 4915 generic.go:334] "Generic (PLEG): container finished" podID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerID="9dcb7ba1f3ec9fd6b49c31a0cd733639a18e603ad9340d0bb81f8a3609ce0718" exitCode=0 Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115657 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerDied","Data":"9dcb7ba1f3ec9fd6b49c31a0cd733639a18e603ad9340d0bb81f8a3609ce0718"} Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115675 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115809 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.115826 4915 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.141599 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.146208 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.347389 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:01:36 crc kubenswrapper[4915]: I0127 19:01:36.378735 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.798748 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"] Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.799571 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log" containerID="cri-o://7313c993a7e83aa75d8cfc2c9ae52eea741b1e8cb01ffe7982588c46a2d68441" gracePeriod=30 Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.799617 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api" containerID="cri-o://8278d12cdd301820b7664825101b93243b9e377222557bca73e51d67736d6dcf" gracePeriod=30 Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.805723 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 27 19:01:37 crc 
kubenswrapper[4915]: I0127 19:01:37.805861 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.842907 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.851873 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.854684 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.864006 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.879101 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.900875 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.900952 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc 
kubenswrapper[4915]: I0127 19:01:37.900980 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnx5s\" (UniqueName: \"kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.901037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.901076 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.901090 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:37 crc kubenswrapper[4915]: I0127 19:01:37.901178 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " 
pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002726 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002808 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002832 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002856 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002917 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 
crc kubenswrapper[4915]: I0127 19:01:38.002961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.002985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnx5s\" (UniqueName: \"kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.003346 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.010298 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.010886 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.012533 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.013466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.024587 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.026512 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnx5s\" (UniqueName: \"kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s\") pod \"barbican-api-7644f9784b-dbhxl\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.134992 4915 generic.go:334] "Generic (PLEG): container finished" podID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerID="7313c993a7e83aa75d8cfc2c9ae52eea741b1e8cb01ffe7982588c46a2d68441" exitCode=143 Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.135038 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" 
event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerDied","Data":"7313c993a7e83aa75d8cfc2c9ae52eea741b1e8cb01ffe7982588c46a2d68441"} Jan 27 19:01:38 crc kubenswrapper[4915]: I0127 19:01:38.181961 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:41 crc kubenswrapper[4915]: I0127 19:01:41.533051 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:01:41 crc kubenswrapper[4915]: I0127 19:01:41.598377 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:41 crc kubenswrapper[4915]: I0127 19:01:41.598819 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="dnsmasq-dns" containerID="cri-o://306ff8dc7e6887829ec92ba9a6ee920b7c7804f1110868596815ffd9849f9510" gracePeriod=10 Jan 27 19:01:41 crc kubenswrapper[4915]: I0127 19:01:41.766319 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Jan 27 19:01:42 crc kubenswrapper[4915]: I0127 19:01:42.179402 4915 generic.go:334] "Generic (PLEG): container finished" podID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerID="306ff8dc7e6887829ec92ba9a6ee920b7c7804f1110868596815ffd9849f9510" exitCode=0 Jan 27 19:01:42 crc kubenswrapper[4915]: I0127 19:01:42.179444 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" event={"ID":"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0","Type":"ContainerDied","Data":"306ff8dc7e6887829ec92ba9a6ee920b7c7804f1110868596815ffd9849f9510"} Jan 27 19:01:43 crc kubenswrapper[4915]: I0127 19:01:43.199262 
4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52876->10.217.0.159:9311: read: connection reset by peer" Jan 27 19:01:43 crc kubenswrapper[4915]: I0127 19:01:43.206000 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dfc77bd66-5pjwk" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52888->10.217.0.159:9311: read: connection reset by peer" Jan 27 19:01:43 crc kubenswrapper[4915]: I0127 19:01:43.319953 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:43 crc kubenswrapper[4915]: I0127 19:01:43.500612 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:44 crc kubenswrapper[4915]: I0127 19:01:44.196842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerStarted","Data":"1280df6d409410b600e18da6a6e8aafdbe4fd0199f72fea9509acc39fa8bd675"} Jan 27 19:01:44 crc kubenswrapper[4915]: I0127 19:01:44.198460 4915 generic.go:334] "Generic (PLEG): container finished" podID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerID="8278d12cdd301820b7664825101b93243b9e377222557bca73e51d67736d6dcf" exitCode=0 Jan 27 19:01:44 crc kubenswrapper[4915]: I0127 19:01:44.198516 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerDied","Data":"8278d12cdd301820b7664825101b93243b9e377222557bca73e51d67736d6dcf"} Jan 27 19:01:45 crc 
kubenswrapper[4915]: E0127 19:01:45.713367 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 27 19:01:45 crc kubenswrapper[4915]: E0127 19:01:45.714411 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjr4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fc00196d-5999-4fb5-ad85-c2ed51b570ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 19:01:45 crc kubenswrapper[4915]: E0127 19:01:45.729081 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.857744 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.958853 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.959013 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.959072 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.959156 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.959228 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjnf\" (UniqueName: \"kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.959309 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config\") pod \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\" (UID: \"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0\") " Jan 27 19:01:45 crc kubenswrapper[4915]: I0127 19:01:45.971669 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf" (OuterVolumeSpecName: "kube-api-access-htjnf") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "kube-api-access-htjnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.007922 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.008511 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.016407 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.032196 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config" (OuterVolumeSpecName: "config") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.032364 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" (UID: "3fc52a32-aa71-44cf-81f3-ac7a1545c3b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.066153 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.066188 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.066199 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.066211 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 
crc kubenswrapper[4915]: I0127 19:01:46.066220 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.066229 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjnf\" (UniqueName: \"kubernetes.io/projected/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0-kube-api-access-htjnf\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.217615 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.236656 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.243573 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerStarted","Data":"e56ef5c3c5d49086313d6f16eebe7dfaaf8ee91810e742a06d368894b5781c22"} Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.246903 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.270026 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs\") pod \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.270114 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle\") pod 
\"8f6a6a3d-1872-4859-9d04-cbe10f799f89\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.270164 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom\") pod \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.270238 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data\") pod \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.270272 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvk5l\" (UniqueName: \"kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l\") pod \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\" (UID: \"8f6a6a3d-1872-4859-9d04-cbe10f799f89\") " Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.275655 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs" (OuterVolumeSpecName: "logs") pod "8f6a6a3d-1872-4859-9d04-cbe10f799f89" (UID: "8f6a6a3d-1872-4859-9d04-cbe10f799f89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.276461 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cb96bd94d-x5bv4" podStartSLOduration=15.276437954 podStartE2EDuration="15.276437954s" podCreationTimestamp="2026-01-27 19:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:46.259707705 +0000 UTC m=+1197.617561379" watchObservedRunningTime="2026-01-27 19:01:46.276437954 +0000 UTC m=+1197.634291618" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.276643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dfc77bd66-5pjwk" event={"ID":"8f6a6a3d-1872-4859-9d04-cbe10f799f89","Type":"ContainerDied","Data":"6ff7769b097d5457f779b7f50753db4669d14d50d23329f7bc3fd26035b95242"} Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.276695 4915 scope.go:117] "RemoveContainer" containerID="8278d12cdd301820b7664825101b93243b9e377222557bca73e51d67736d6dcf" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.276841 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dfc77bd66-5pjwk" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.285172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" event={"ID":"3fc52a32-aa71-44cf-81f3-ac7a1545c3b0","Type":"ContainerDied","Data":"9340356295e3449674fb9081893031fe2c93e2e30ceddb9aac0592dd361c9aad"} Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.285273 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-rsxp6" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.285927 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="ceilometer-notification-agent" containerID="cri-o://5a76fa29bc2eda93d30186a1db18d9c67ba10a4fa44315cfee2b9ff812297c96" gracePeriod=30 Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.285994 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="sg-core" containerID="cri-o://e0a916e110308d756b2c4be697a99ac33c79fa849fb72cada723642f15d817d7" gracePeriod=30 Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.288179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f6a6a3d-1872-4859-9d04-cbe10f799f89" (UID: "8f6a6a3d-1872-4859-9d04-cbe10f799f89"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.290225 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l" (OuterVolumeSpecName: "kube-api-access-jvk5l") pod "8f6a6a3d-1872-4859-9d04-cbe10f799f89" (UID: "8f6a6a3d-1872-4859-9d04-cbe10f799f89"). InnerVolumeSpecName "kube-api-access-jvk5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.372500 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.372525 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvk5l\" (UniqueName: \"kubernetes.io/projected/8f6a6a3d-1872-4859-9d04-cbe10f799f89-kube-api-access-jvk5l\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.372536 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f6a6a3d-1872-4859-9d04-cbe10f799f89-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.399152 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f6a6a3d-1872-4859-9d04-cbe10f799f89" (UID: "8f6a6a3d-1872-4859-9d04-cbe10f799f89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.408772 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.417975 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.422011 4915 scope.go:117] "RemoveContainer" containerID="7313c993a7e83aa75d8cfc2c9ae52eea741b1e8cb01ffe7982588c46a2d68441" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.425367 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-rsxp6"] Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.448646 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data" (OuterVolumeSpecName: "config-data") pod "8f6a6a3d-1872-4859-9d04-cbe10f799f89" (UID: "8f6a6a3d-1872-4859-9d04-cbe10f799f89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:46 crc kubenswrapper[4915]: W0127 19:01:46.458833 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b70cef8_be7d_4d25_87e3_c9916452d855.slice/crio-46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805 WatchSource:0}: Error finding container 46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805: Status 404 returned error can't find the container with id 46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805 Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.474635 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.474679 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6a6a3d-1872-4859-9d04-cbe10f799f89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.476496 4915 scope.go:117] "RemoveContainer" containerID="306ff8dc7e6887829ec92ba9a6ee920b7c7804f1110868596815ffd9849f9510" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.518043 4915 scope.go:117] "RemoveContainer" containerID="97502c59181578a12e44b071396f1b40560fdfe0d1e791031e7c28373b5b2136" Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.647986 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"] Jan 27 19:01:46 crc kubenswrapper[4915]: I0127 19:01:46.667743 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dfc77bd66-5pjwk"] Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.294138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7drwm" 
event={"ID":"99e935b3-c64c-4c02-821b-18301c6b6c27","Type":"ContainerStarted","Data":"1b588bc3241f6527c713e1b6ce11c85307d42360eacca87a03d21e1910e5ed34"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.297010 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerStarted","Data":"6cce6eea7541be78647a81657904cffa63e1bcb52b367db413e89aa732f8add2"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.297054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerStarted","Data":"b733ca9ab35e4ebe64a9190112a4fec324eed485f565cf65e04c47fd1024429e"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.299292 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerStarted","Data":"23aa745b7dd112c1f66341caed263053b7f5ea24b9e8c71afd2243cebd1cab75"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.299358 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerStarted","Data":"c4b9293d64815d72131fea7b74e18820d6f50efa91a0dc1e08436c11314f0de5"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.301262 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerStarted","Data":"5b5fd7950d039f8f25f1800441b524fb35816018c64293f78005b6a7b70b7696"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.301292 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" 
event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerStarted","Data":"62678ab433696d7351f81f9c1770c3c809c3a53ec6feeae8a28042bf8f437350"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.301303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerStarted","Data":"46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.301409 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.302928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerStarted","Data":"e45fa431d5b8ceb54b435ca3de9cafe6ec697d885699991a82ee10f7135d868b"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.302969 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerStarted","Data":"4e8f634b8ef683d623319b77feb8b9f9b68eb10a115f361187710eedb123c004"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.302981 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerStarted","Data":"1bee0958d742c43266ba16c4e675c2cb0143e957cb5b3d242b1120f1e83cbf06"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.303009 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.303095 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 
19:01:47.306916 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerStarted","Data":"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.306953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerStarted","Data":"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.312923 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7drwm" podStartSLOduration=3.495638014 podStartE2EDuration="47.312908916s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="2026-01-27 19:01:01.912774617 +0000 UTC m=+1153.270628281" lastFinishedPulling="2026-01-27 19:01:45.730045519 +0000 UTC m=+1197.087899183" observedRunningTime="2026-01-27 19:01:47.307677948 +0000 UTC m=+1198.665531612" watchObservedRunningTime="2026-01-27 19:01:47.312908916 +0000 UTC m=+1198.670762580" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.327106 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" podStartSLOduration=4.306515962 podStartE2EDuration="17.327090303s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="2026-01-27 19:01:32.664098168 +0000 UTC m=+1184.021951832" lastFinishedPulling="2026-01-27 19:01:45.684672479 +0000 UTC m=+1197.042526173" observedRunningTime="2026-01-27 19:01:47.323204638 +0000 UTC m=+1198.681058302" watchObservedRunningTime="2026-01-27 19:01:47.327090303 +0000 UTC m=+1198.684943967" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.327674 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerID="e0a916e110308d756b2c4be697a99ac33c79fa849fb72cada723642f15d817d7" exitCode=2 Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.327737 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerDied","Data":"e0a916e110308d756b2c4be697a99ac33c79fa849fb72cada723642f15d817d7"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.331321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerStarted","Data":"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.331349 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerStarted","Data":"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6"} Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.358561 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f9776577f-2jndx" podStartSLOduration=4.292752025 podStartE2EDuration="17.358545942s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="2026-01-27 19:01:32.664265802 +0000 UTC m=+1184.022119466" lastFinishedPulling="2026-01-27 19:01:45.730059719 +0000 UTC m=+1197.087913383" observedRunningTime="2026-01-27 19:01:47.356366839 +0000 UTC m=+1198.714220503" watchObservedRunningTime="2026-01-27 19:01:47.358545942 +0000 UTC m=+1198.716399606" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.386665 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" path="/var/lib/kubelet/pods/3fc52a32-aa71-44cf-81f3-ac7a1545c3b0/volumes" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 
19:01:47.387315 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" path="/var/lib/kubelet/pods/8f6a6a3d-1872-4859-9d04-cbe10f799f89/volumes" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.390836 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"] Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.399112 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7644f9784b-dbhxl" podStartSLOduration=10.399098174 podStartE2EDuration="10.399098174s" podCreationTimestamp="2026-01-27 19:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:47.386287691 +0000 UTC m=+1198.744141355" watchObservedRunningTime="2026-01-27 19:01:47.399098174 +0000 UTC m=+1198.756951838" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.434566 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"] Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.438359 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" podStartSLOduration=3.984793992 podStartE2EDuration="17.438340974s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="2026-01-27 19:01:32.277304387 +0000 UTC m=+1183.635158051" lastFinishedPulling="2026-01-27 19:01:45.730851369 +0000 UTC m=+1197.088705033" observedRunningTime="2026-01-27 19:01:47.41815936 +0000 UTC m=+1198.776013024" watchObservedRunningTime="2026-01-27 19:01:47.438340974 +0000 UTC m=+1198.796194628" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.448251 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cc59d8b57-zj69c" podStartSLOduration=14.448239056 
podStartE2EDuration="14.448239056s" podCreationTimestamp="2026-01-27 19:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:47.44225317 +0000 UTC m=+1198.800106834" watchObservedRunningTime="2026-01-27 19:01:47.448239056 +0000 UTC m=+1198.806092710" Jan 27 19:01:47 crc kubenswrapper[4915]: I0127 19:01:47.475182 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bb5986567-mfzn6" podStartSLOduration=4.062291309 podStartE2EDuration="17.475166435s" podCreationTimestamp="2026-01-27 19:01:30 +0000 UTC" firstStartedPulling="2026-01-27 19:01:32.258712423 +0000 UTC m=+1183.616566087" lastFinishedPulling="2026-01-27 19:01:45.671587509 +0000 UTC m=+1197.029441213" observedRunningTime="2026-01-27 19:01:47.469441905 +0000 UTC m=+1198.827295569" watchObservedRunningTime="2026-01-27 19:01:47.475166435 +0000 UTC m=+1198.833020099" Jan 27 19:01:49 crc kubenswrapper[4915]: I0127 19:01:49.372362 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener-log" containerID="cri-o://a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" gracePeriod=30 Jan 27 19:01:49 crc kubenswrapper[4915]: I0127 19:01:49.378435 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5bb5986567-mfzn6" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker-log" containerID="cri-o://e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" gracePeriod=30 Jan 27 19:01:49 crc kubenswrapper[4915]: I0127 19:01:49.378894 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5bb5986567-mfzn6" podUID="9455215d-2a98-42df-a801-53f31071447e" 
containerName="barbican-worker" containerID="cri-o://242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" gracePeriod=30 Jan 27 19:01:49 crc kubenswrapper[4915]: I0127 19:01:49.377895 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener" containerID="cri-o://5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" gracePeriod=30 Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.273763 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bb5986567-mfzn6" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.283036 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.360767 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data\") pod \"a6c1db90-61fc-4aa0-8371-eae7ac202752\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.360871 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs\") pod \"9455215d-2a98-42df-a801-53f31071447e\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.360922 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle\") pod \"a6c1db90-61fc-4aa0-8371-eae7ac202752\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " Jan 27 19:01:50 crc 
kubenswrapper[4915]: I0127 19:01:50.360950 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom\") pod \"a6c1db90-61fc-4aa0-8371-eae7ac202752\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361009 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data\") pod \"9455215d-2a98-42df-a801-53f31071447e\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361063 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg8qw\" (UniqueName: \"kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw\") pod \"9455215d-2a98-42df-a801-53f31071447e\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361219 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs\") pod \"a6c1db90-61fc-4aa0-8371-eae7ac202752\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361249 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom\") pod \"9455215d-2a98-42df-a801-53f31071447e\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361281 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle\") pod \"9455215d-2a98-42df-a801-53f31071447e\" (UID: \"9455215d-2a98-42df-a801-53f31071447e\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361307 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gwnv\" (UniqueName: \"kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv\") pod \"a6c1db90-61fc-4aa0-8371-eae7ac202752\" (UID: \"a6c1db90-61fc-4aa0-8371-eae7ac202752\") " Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.361731 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs" (OuterVolumeSpecName: "logs") pod "a6c1db90-61fc-4aa0-8371-eae7ac202752" (UID: "a6c1db90-61fc-4aa0-8371-eae7ac202752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.362042 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs" (OuterVolumeSpecName: "logs") pod "9455215d-2a98-42df-a801-53f31071447e" (UID: "9455215d-2a98-42df-a801-53f31071447e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.372079 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw" (OuterVolumeSpecName: "kube-api-access-gg8qw") pod "9455215d-2a98-42df-a801-53f31071447e" (UID: "9455215d-2a98-42df-a801-53f31071447e"). InnerVolumeSpecName "kube-api-access-gg8qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.372108 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9455215d-2a98-42df-a801-53f31071447e" (UID: "9455215d-2a98-42df-a801-53f31071447e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373616 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerID="5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" exitCode=0 Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373640 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerID="a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" exitCode=143 Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373685 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerDied","Data":"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257"} Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373715 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerDied","Data":"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7"} Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" event={"ID":"a6c1db90-61fc-4aa0-8371-eae7ac202752","Type":"ContainerDied","Data":"71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab"} 
Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373751 4915 scope.go:117] "RemoveContainer" containerID="5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373937 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7686c5764d-hfh9t" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.373964 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a6c1db90-61fc-4aa0-8371-eae7ac202752" (UID: "a6c1db90-61fc-4aa0-8371-eae7ac202752"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.375784 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv" (OuterVolumeSpecName: "kube-api-access-9gwnv") pod "a6c1db90-61fc-4aa0-8371-eae7ac202752" (UID: "a6c1db90-61fc-4aa0-8371-eae7ac202752"). InnerVolumeSpecName "kube-api-access-9gwnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377477 4915 generic.go:334] "Generic (PLEG): container finished" podID="9455215d-2a98-42df-a801-53f31071447e" containerID="242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" exitCode=0 Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377499 4915 generic.go:334] "Generic (PLEG): container finished" podID="9455215d-2a98-42df-a801-53f31071447e" containerID="e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" exitCode=143 Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377521 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerDied","Data":"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4"} Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377540 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerDied","Data":"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6"} Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377656 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bb5986567-mfzn6" event={"ID":"9455215d-2a98-42df-a801-53f31071447e","Type":"ContainerDied","Data":"51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01"} Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.377715 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5bb5986567-mfzn6" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.401352 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9455215d-2a98-42df-a801-53f31071447e" (UID: "9455215d-2a98-42df-a801-53f31071447e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.404743 4915 scope.go:117] "RemoveContainer" containerID="a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.421994 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c1db90-61fc-4aa0-8371-eae7ac202752" (UID: "a6c1db90-61fc-4aa0-8371-eae7ac202752"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.423607 4915 scope.go:117] "RemoveContainer" containerID="5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" Jan 27 19:01:50 crc kubenswrapper[4915]: E0127 19:01:50.424133 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257\": container with ID starting with 5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257 not found: ID does not exist" containerID="5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.424181 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257"} err="failed to get container status \"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257\": rpc error: code = NotFound desc = could not find container \"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257\": container with ID starting with 5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.424209 4915 scope.go:117] "RemoveContainer" containerID="a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" Jan 27 19:01:50 crc kubenswrapper[4915]: E0127 19:01:50.424710 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7\": container with ID starting with a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7 not found: ID does not exist" containerID="a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.424762 
4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7"} err="failed to get container status \"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7\": rpc error: code = NotFound desc = could not find container \"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7\": container with ID starting with a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.424847 4915 scope.go:117] "RemoveContainer" containerID="5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.425181 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257"} err="failed to get container status \"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257\": rpc error: code = NotFound desc = could not find container \"5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257\": container with ID starting with 5084446e20df6862f5537b93fd308a566c43f9b09533944070de614530073257 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.425210 4915 scope.go:117] "RemoveContainer" containerID="a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.425649 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7"} err="failed to get container status \"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7\": rpc error: code = NotFound desc = could not find container \"a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7\": container with ID starting with 
a4787e1f3d251eb309be9279ab54509ca98dbe79c0739e086e49b61781ac3ab7 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.425694 4915 scope.go:117] "RemoveContainer" containerID="242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.432959 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data" (OuterVolumeSpecName: "config-data") pod "9455215d-2a98-42df-a801-53f31071447e" (UID: "9455215d-2a98-42df-a801-53f31071447e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.438020 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data" (OuterVolumeSpecName: "config-data") pod "a6c1db90-61fc-4aa0-8371-eae7ac202752" (UID: "a6c1db90-61fc-4aa0-8371-eae7ac202752"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.442912 4915 scope.go:117] "RemoveContainer" containerID="e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.459112 4915 scope.go:117] "RemoveContainer" containerID="242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" Jan 27 19:01:50 crc kubenswrapper[4915]: E0127 19:01:50.459535 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4\": container with ID starting with 242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4 not found: ID does not exist" containerID="242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.459588 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4"} err="failed to get container status \"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4\": rpc error: code = NotFound desc = could not find container \"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4\": container with ID starting with 242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.459614 4915 scope.go:117] "RemoveContainer" containerID="e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" Jan 27 19:01:50 crc kubenswrapper[4915]: E0127 19:01:50.461041 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6\": container with ID starting with 
e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6 not found: ID does not exist" containerID="e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.461071 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6"} err="failed to get container status \"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6\": rpc error: code = NotFound desc = could not find container \"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6\": container with ID starting with e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.461087 4915 scope.go:117] "RemoveContainer" containerID="242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.461286 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4"} err="failed to get container status \"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4\": rpc error: code = NotFound desc = could not find container \"242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4\": container with ID starting with 242b85c3ebcbf86617e37e0baab13393e52a400ba4ba2e7ad14eb374b26149a4 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.461307 4915 scope.go:117] "RemoveContainer" containerID="e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.461860 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6"} err="failed to get container status 
\"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6\": rpc error: code = NotFound desc = could not find container \"e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6\": container with ID starting with e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6 not found: ID does not exist" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462927 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c1db90-61fc-4aa0-8371-eae7ac202752-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462946 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462956 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462965 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gwnv\" (UniqueName: \"kubernetes.io/projected/a6c1db90-61fc-4aa0-8371-eae7ac202752-kube-api-access-9gwnv\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462974 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462984 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9455215d-2a98-42df-a801-53f31071447e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.462994 4915 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.463002 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c1db90-61fc-4aa0-8371-eae7ac202752-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.463010 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9455215d-2a98-42df-a801-53f31071447e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.463018 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg8qw\" (UniqueName: \"kubernetes.io/projected/9455215d-2a98-42df-a801-53f31071447e-kube-api-access-gg8qw\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.624975 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.625052 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.713658 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"] Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 
19:01:50.723885 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7686c5764d-hfh9t"] Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.731456 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"] Jan 27 19:01:50 crc kubenswrapper[4915]: I0127 19:01:50.739162 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5bb5986567-mfzn6"] Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.377819 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9455215d-2a98-42df-a801-53f31071447e" path="/var/lib/kubelet/pods/9455215d-2a98-42df-a801-53f31071447e/volumes" Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.378528 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" path="/var/lib/kubelet/pods/a6c1db90-61fc-4aa0-8371-eae7ac202752/volumes" Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.391274 4915 generic.go:334] "Generic (PLEG): container finished" podID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerID="5a76fa29bc2eda93d30186a1db18d9c67ba10a4fa44315cfee2b9ff812297c96" exitCode=0 Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.391351 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerDied","Data":"5a76fa29bc2eda93d30186a1db18d9c67ba10a4fa44315cfee2b9ff812297c96"} Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.394226 4915 generic.go:334] "Generic (PLEG): container finished" podID="99e935b3-c64c-4c02-821b-18301c6b6c27" containerID="1b588bc3241f6527c713e1b6ce11c85307d42360eacca87a03d21e1910e5ed34" exitCode=0 Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.394260 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7drwm" 
event={"ID":"99e935b3-c64c-4c02-821b-18301c6b6c27","Type":"ContainerDied","Data":"1b588bc3241f6527c713e1b6ce11c85307d42360eacca87a03d21e1910e5ed34"}
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.712392 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.882359 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjr4f\" (UniqueName: \"kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.882407 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.882464 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.883170 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.883230 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.883247 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.883264 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd\") pod \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\" (UID: \"fc00196d-5999-4fb5-ad85-c2ed51b570ae\") "
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.883970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.884278 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.887781 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts" (OuterVolumeSpecName: "scripts") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.900739 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f" (OuterVolumeSpecName: "kube-api-access-cjr4f") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "kube-api-access-cjr4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.920547 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data" (OuterVolumeSpecName: "config-data") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.935481 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.949373 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc00196d-5999-4fb5-ad85-c2ed51b570ae" (UID: "fc00196d-5999-4fb5-ad85-c2ed51b570ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.984982 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjr4f\" (UniqueName: \"kubernetes.io/projected/fc00196d-5999-4fb5-ad85-c2ed51b570ae-kube-api-access-cjr4f\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985110 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985212 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985315 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985385 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985452 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc00196d-5999-4fb5-ad85-c2ed51b570ae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:51 crc kubenswrapper[4915]: I0127 19:01:51.985522 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc00196d-5999-4fb5-ad85-c2ed51b570ae-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.406966 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fc00196d-5999-4fb5-ad85-c2ed51b570ae","Type":"ContainerDied","Data":"95b5ae3c4e845048c16aaba4c0cc8813c2b9538cefec8715e5045ef1a3410d9d"}
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.407022 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.407068 4915 scope.go:117] "RemoveContainer" containerID="e0a916e110308d756b2c4be697a99ac33c79fa849fb72cada723642f15d817d7"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.466463 4915 scope.go:117] "RemoveContainer" containerID="5a76fa29bc2eda93d30186a1db18d9c67ba10a4fa44315cfee2b9ff812297c96"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.479822 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.491587 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546041 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546373 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="dnsmasq-dns"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546388 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="dnsmasq-dns"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546403 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="ceilometer-notification-agent"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546409 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="ceilometer-notification-agent"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546422 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546428 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546441 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546447 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546459 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="init"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546465 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="init"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546479 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546485 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546500 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="sg-core"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546506 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="sg-core"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546517 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546523 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546529 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546534 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener"
Jan 27 19:01:52 crc kubenswrapper[4915]: E0127 19:01:52.546545 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546550 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546701 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="sg-core"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546715 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546724 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546736 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546748 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c1db90-61fc-4aa0-8371-eae7ac202752" containerName="barbican-keystone-listener-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546757 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc52a32-aa71-44cf-81f3-ac7a1545c3b0" containerName="dnsmasq-dns"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546767 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" containerName="ceilometer-notification-agent"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546776 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6a6a3d-1872-4859-9d04-cbe10f799f89" containerName="barbican-api"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.546786 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9455215d-2a98-42df-a801-53f31071447e" containerName="barbican-worker-log"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.552547 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.559681 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.559693 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.568350 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7gf\" (UniqueName: \"kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701779 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701856 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701890 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.701956 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.803705 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.803772 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7gf\" (UniqueName: \"kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.803948 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804048 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804071 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804116 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804707 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.804720 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.809119 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.809387 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.809648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.809668 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.819030 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7gf\" (UniqueName: \"kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf\") pod \"ceilometer-0\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") " pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.875409 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:01:52 crc kubenswrapper[4915]: I0127 19:01:52.888344 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7drwm"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.007585 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvbtc\" (UniqueName: \"kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.007627 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.007719 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.007785 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.007921 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.008078 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts\") pod \"99e935b3-c64c-4c02-821b-18301c6b6c27\" (UID: \"99e935b3-c64c-4c02-821b-18301c6b6c27\") "
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.009962 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.012665 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc" (OuterVolumeSpecName: "kube-api-access-kvbtc") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "kube-api-access-kvbtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.012949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts" (OuterVolumeSpecName: "scripts") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.014913 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.050674 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.064009 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data" (OuterVolumeSpecName: "config-data") pod "99e935b3-c64c-4c02-821b-18301c6b6c27" (UID: "99e935b3-c64c-4c02-821b-18301c6b6c27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121086 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121122 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99e935b3-c64c-4c02-821b-18301c6b6c27-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121132 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121140 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121148 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e935b3-c64c-4c02-821b-18301c6b6c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.121156 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvbtc\" (UniqueName: \"kubernetes.io/projected/99e935b3-c64c-4c02-821b-18301c6b6c27-kube-api-access-kvbtc\") on node \"crc\" DevicePath \"\""
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.320610 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.369664 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc00196d-5999-4fb5-ad85-c2ed51b570ae" path="/var/lib/kubelet/pods/fc00196d-5999-4fb5-ad85-c2ed51b570ae/volumes"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.416057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerStarted","Data":"2ff177e0fa03679d08aafc69fb7c980c63d9b9312f063e17543baef940ceb413"}
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.419011 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7drwm" event={"ID":"99e935b3-c64c-4c02-821b-18301c6b6c27","Type":"ContainerDied","Data":"f7558eccc1ef72598e2149a47aa4b003380a4ba3f2928ff7147a818001d7b66b"}
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.419047 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7558eccc1ef72598e2149a47aa4b003380a4ba3f2928ff7147a818001d7b66b"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.419172 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7drwm"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.740083 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 19:01:53 crc kubenswrapper[4915]: E0127 19:01:53.740411 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" containerName="cinder-db-sync"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.740424 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" containerName="cinder-db-sync"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.740597 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" containerName="cinder-db-sync"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.741445 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.758444 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.759393 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hfk4n"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.759589 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.759719 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.837260 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.846148 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.848364 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.848425 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.848457 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.848579 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnctd\" (UniqueName: \"kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.848622 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.874940 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"]
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.876848 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.889442 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"]
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956096 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956144 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956182 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956235 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956266 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956289 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956349 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnctd\" (UniqueName: \"kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956374 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpg6n\" (UniqueName: \"kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956392 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956419 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.956482 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.963274 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.964250 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0"
Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.966631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.981050 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.989110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.989888 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnctd\" (UniqueName: \"kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd\") pod \"cinder-scheduler-0\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " pod="openstack/cinder-scheduler-0" Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.996451 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:01:53 crc kubenswrapper[4915]: I0127 19:01:53.998762 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.004981 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.016757 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.059812 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.057709 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061485 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061657 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061711 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061769 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061829 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061935 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061955 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.061991 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bj6c\" (UniqueName: \"kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.062019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpg6n\" (UniqueName: \"kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.062074 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.062156 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.062567 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.065648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.066177 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.066834 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.085929 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.090670 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpg6n\" (UniqueName: \"kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n\") pod \"dnsmasq-dns-6578955fd5-67xmd\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167725 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167780 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167817 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bj6c\" (UniqueName: \"kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167946 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.167977 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.168002 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.168488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.169522 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.174040 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc 
kubenswrapper[4915]: I0127 19:01:54.181031 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.184951 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.187290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.193508 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bj6c\" (UniqueName: \"kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c\") pod \"cinder-api-0\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.219335 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.234195 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.702236 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.848978 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:01:54 crc kubenswrapper[4915]: I0127 19:01:54.985857 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"] Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.203837 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.458557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerStarted","Data":"94d03b9c7292ecb9d1590484579ae5bde631e673fe9472dd58a3b059203b4d0c"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.461093 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerStarted","Data":"303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.461130 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerStarted","Data":"121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.464561 4915 generic.go:334] "Generic (PLEG): container finished" podID="7d649213-0d40-4201-9012-222a20cb227d" containerID="425c5e8236250a8b8e2984437acdee213e11990b4905f1a2f19fd4e5f5bf5221" exitCode=0 Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.464650 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6578955fd5-67xmd" event={"ID":"7d649213-0d40-4201-9012-222a20cb227d","Type":"ContainerDied","Data":"425c5e8236250a8b8e2984437acdee213e11990b4905f1a2f19fd4e5f5bf5221"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.464679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" event={"ID":"7d649213-0d40-4201-9012-222a20cb227d","Type":"ContainerStarted","Data":"4227187e95d8fbdf545e8bb0a1ee52a1941ab0623a48f3e4294e7b8e43a5c4ca"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.476283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerStarted","Data":"4fa753b7501a5f1e8bcf1207b7782b6f41d2132b48293558cb50f948406be991"} Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.578412 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.694243 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.694486 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65cdc599f4-6dhwr" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api-log" containerID="cri-o://9b2ff4fa11cb679a08a4692b918f9230b644f60da09705a4bf74a03926c47c44" gracePeriod=30 Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.694868 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65cdc599f4-6dhwr" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api" containerID="cri-o://1f0d61ad56942ea7287b283e39eeb0e031a58874d420cbece9afe0a6b008de9e" gracePeriod=30 Jan 27 19:01:55 crc kubenswrapper[4915]: I0127 19:01:55.806069 4915 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-api-0"] Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.500208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerStarted","Data":"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.505120 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerStarted","Data":"b20ae3a1ddf485113076be8bb8019dc02f28942c17f31d9c3c4eb2b71e63a501"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.505163 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerStarted","Data":"ce3801bf2a5b527a4d3df4d49d0e7a466aafd70613836972cc676d35614d70db"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.505274 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api-log" containerID="cri-o://ce3801bf2a5b527a4d3df4d49d0e7a466aafd70613836972cc676d35614d70db" gracePeriod=30 Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.505504 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.505739 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api" containerID="cri-o://b20ae3a1ddf485113076be8bb8019dc02f28942c17f31d9c3c4eb2b71e63a501" gracePeriod=30 Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.518637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerStarted","Data":"9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.522081 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" event={"ID":"7d649213-0d40-4201-9012-222a20cb227d","Type":"ContainerStarted","Data":"8c6168fa75a3ae1fe6ff503932c9a5efeae098ff0353637e894962b17722529b"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.522231 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.536953 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.536935203 podStartE2EDuration="3.536935203s" podCreationTimestamp="2026-01-27 19:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:56.527247576 +0000 UTC m=+1207.885101240" watchObservedRunningTime="2026-01-27 19:01:56.536935203 +0000 UTC m=+1207.894788867" Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.539745 4915 generic.go:334] "Generic (PLEG): container finished" podID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerID="9b2ff4fa11cb679a08a4692b918f9230b644f60da09705a4bf74a03926c47c44" exitCode=143 Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.539785 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerDied","Data":"9b2ff4fa11cb679a08a4692b918f9230b644f60da09705a4bf74a03926c47c44"} Jan 27 19:01:56 crc kubenswrapper[4915]: I0127 19:01:56.558661 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" podStartSLOduration=3.558635724 
podStartE2EDuration="3.558635724s" podCreationTimestamp="2026-01-27 19:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:56.547872981 +0000 UTC m=+1207.905726645" watchObservedRunningTime="2026-01-27 19:01:56.558635724 +0000 UTC m=+1207.916489388" Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.549407 4915 generic.go:334] "Generic (PLEG): container finished" podID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerID="ce3801bf2a5b527a4d3df4d49d0e7a466aafd70613836972cc676d35614d70db" exitCode=143 Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.549477 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerDied","Data":"ce3801bf2a5b527a4d3df4d49d0e7a466aafd70613836972cc676d35614d70db"} Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.553360 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerStarted","Data":"c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d"} Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.554724 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.560254 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerStarted","Data":"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f"} Jan 27 19:01:57 crc kubenswrapper[4915]: I0127 19:01:57.587368 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7410240959999999 podStartE2EDuration="5.587349086s" podCreationTimestamp="2026-01-27 19:01:52 +0000 UTC" firstStartedPulling="2026-01-27 
19:01:53.344906627 +0000 UTC m=+1204.702760301" lastFinishedPulling="2026-01-27 19:01:57.191231627 +0000 UTC m=+1208.549085291" observedRunningTime="2026-01-27 19:01:57.578131531 +0000 UTC m=+1208.935985195" watchObservedRunningTime="2026-01-27 19:01:57.587349086 +0000 UTC m=+1208.945202740" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.087384 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.260861 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65cdc599f4-6dhwr" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:59480->10.217.0.163:9311: read: connection reset by peer" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.260869 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65cdc599f4-6dhwr" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:59482->10.217.0.163:9311: read: connection reset by peer" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.581534 4915 generic.go:334] "Generic (PLEG): container finished" podID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerID="1f0d61ad56942ea7287b283e39eeb0e031a58874d420cbece9afe0a6b008de9e" exitCode=0 Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.581651 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerDied","Data":"1f0d61ad56942ea7287b283e39eeb0e031a58874d420cbece9afe0a6b008de9e"} Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.705295 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-cff5fcc84-lxsfm" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.706691 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cff5fcc84-lxsfm" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.732354 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.736604 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.909356082 podStartE2EDuration="6.736576585s" podCreationTimestamp="2026-01-27 19:01:53 +0000 UTC" firstStartedPulling="2026-01-27 19:01:54.707911116 +0000 UTC m=+1206.065764780" lastFinishedPulling="2026-01-27 19:01:55.535131619 +0000 UTC m=+1206.892985283" observedRunningTime="2026-01-27 19:01:57.606937915 +0000 UTC m=+1208.964791579" watchObservedRunningTime="2026-01-27 19:01:59.736576585 +0000 UTC m=+1211.094430289" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.918628 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data\") pod \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.919324 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rvvp\" (UniqueName: \"kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp\") pod \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.919357 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs\") pod 
\"5d299c59-4dbe-4032-aadc-3c35ecde70b2\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.919396 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom\") pod \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.919429 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle\") pod \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\" (UID: \"5d299c59-4dbe-4032-aadc-3c35ecde70b2\") " Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.919836 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs" (OuterVolumeSpecName: "logs") pod "5d299c59-4dbe-4032-aadc-3c35ecde70b2" (UID: "5d299c59-4dbe-4032-aadc-3c35ecde70b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.924371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp" (OuterVolumeSpecName: "kube-api-access-8rvvp") pod "5d299c59-4dbe-4032-aadc-3c35ecde70b2" (UID: "5d299c59-4dbe-4032-aadc-3c35ecde70b2"). InnerVolumeSpecName "kube-api-access-8rvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.940452 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d299c59-4dbe-4032-aadc-3c35ecde70b2" (UID: "5d299c59-4dbe-4032-aadc-3c35ecde70b2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.981930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data" (OuterVolumeSpecName: "config-data") pod "5d299c59-4dbe-4032-aadc-3c35ecde70b2" (UID: "5d299c59-4dbe-4032-aadc-3c35ecde70b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:59 crc kubenswrapper[4915]: I0127 19:01:59.997323 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d299c59-4dbe-4032-aadc-3c35ecde70b2" (UID: "5d299c59-4dbe-4032-aadc-3c35ecde70b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.020906 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.020937 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rvvp\" (UniqueName: \"kubernetes.io/projected/5d299c59-4dbe-4032-aadc-3c35ecde70b2-kube-api-access-8rvvp\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.020949 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d299c59-4dbe-4032-aadc-3c35ecde70b2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.020957 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.020966 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d299c59-4dbe-4032-aadc-3c35ecde70b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.591283 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65cdc599f4-6dhwr" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.591260 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65cdc599f4-6dhwr" event={"ID":"5d299c59-4dbe-4032-aadc-3c35ecde70b2","Type":"ContainerDied","Data":"711b392f3670777e08ac006546bfdd2ee1140c99c9e18d8ea026e627ab9b779d"} Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.592109 4915 scope.go:117] "RemoveContainer" containerID="1f0d61ad56942ea7287b283e39eeb0e031a58874d420cbece9afe0a6b008de9e" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.625873 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.627998 4915 scope.go:117] "RemoveContainer" containerID="9b2ff4fa11cb679a08a4692b918f9230b644f60da09705a4bf74a03926c47c44" Jan 27 19:02:00 crc kubenswrapper[4915]: I0127 19:02:00.635895 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-65cdc599f4-6dhwr"] Jan 27 19:02:01 crc kubenswrapper[4915]: I0127 19:02:01.320562 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-79f7874b76-rrchs" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Jan 27 19:02:01 crc kubenswrapper[4915]: I0127 19:02:01.389171 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" path="/var/lib/kubelet/pods/5d299c59-4dbe-4032-aadc-3c35ecde70b2/volumes" Jan 27 19:02:01 crc kubenswrapper[4915]: I0127 19:02:01.964949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:02:02 crc kubenswrapper[4915]: I0127 19:02:02.764162 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.210755 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.236949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.306087 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"] Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.306328 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cb96bd94d-x5bv4" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-api" containerID="cri-o://1280df6d409410b600e18da6a6e8aafdbe4fd0199f72fea9509acc39fa8bd675" gracePeriod=30 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.306838 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cb96bd94d-x5bv4" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-httpd" containerID="cri-o://e56ef5c3c5d49086313d6f16eebe7dfaaf8ee91810e742a06d368894b5781c22" gracePeriod=30 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.338349 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.338746 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="dnsmasq-dns" containerID="cri-o://bec621d3d287aef537baa9604283134644db9a9872b51b976d0580388579769d" gracePeriod=10 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.356728 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:02:04 crc 
kubenswrapper[4915]: I0127 19:02:04.421017 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.639472 4915 generic.go:334] "Generic (PLEG): container finished" podID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerID="e56ef5c3c5d49086313d6f16eebe7dfaaf8ee91810e742a06d368894b5781c22" exitCode=0 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.639856 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerDied","Data":"e56ef5c3c5d49086313d6f16eebe7dfaaf8ee91810e742a06d368894b5781c22"} Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.661101 4915 generic.go:334] "Generic (PLEG): container finished" podID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerID="bec621d3d287aef537baa9604283134644db9a9872b51b976d0580388579769d" exitCode=0 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.661346 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="cinder-scheduler" containerID="cri-o://3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19" gracePeriod=30 Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.661473 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" event={"ID":"f6541065-8f84-4ce9-9b90-61ea0f80ee0d","Type":"ContainerDied","Data":"bec621d3d287aef537baa9604283134644db9a9872b51b976d0580388579769d"} Jan 27 19:02:04 crc kubenswrapper[4915]: I0127 19:02:04.661900 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="probe" containerID="cri-o://a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f" gracePeriod=30 Jan 27 19:02:05 crc 
kubenswrapper[4915]: I0127 19:02:05.066506 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.125613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.125680 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gvm\" (UniqueName: \"kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.125741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.125845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.125913 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 
19:02:05.125938 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc\") pod \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\" (UID: \"f6541065-8f84-4ce9-9b90-61ea0f80ee0d\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.138977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm" (OuterVolumeSpecName: "kube-api-access-r9gvm") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). InnerVolumeSpecName "kube-api-access-r9gvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.184374 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.208529 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.228447 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.233760 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config" (OuterVolumeSpecName: "config") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.234471 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.234514 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.234535 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.234545 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.234555 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gvm\" (UniqueName: \"kubernetes.io/projected/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-kube-api-access-r9gvm\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.244205 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6541065-8f84-4ce9-9b90-61ea0f80ee0d" (UID: "f6541065-8f84-4ce9-9b90-61ea0f80ee0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.336349 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6541065-8f84-4ce9-9b90-61ea0f80ee0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.701218 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79f7874b76-rrchs_53bd39be-21df-4cea-aed2-0ac820ce45b6/neutron-api/0.log" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.701457 4915 generic.go:334] "Generic (PLEG): container finished" podID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerID="2f5ebdc9256b15339951da1e7a6b802d34dbb6a7dde142f5f99b12f9156c78e0" exitCode=137 Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.701509 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerDied","Data":"2f5ebdc9256b15339951da1e7a6b802d34dbb6a7dde142f5f99b12f9156c78e0"} Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.701535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79f7874b76-rrchs" event={"ID":"53bd39be-21df-4cea-aed2-0ac820ce45b6","Type":"ContainerDied","Data":"0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2"} Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.701545 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cd778ea3fa7bdad7f1b59022fb427177a46e2d2ca6eb9f428a40356c29f7ef2" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.707110 4915 generic.go:334] "Generic (PLEG): container 
finished" podID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerID="a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f" exitCode=0 Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.707174 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerDied","Data":"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f"} Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.708604 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" event={"ID":"f6541065-8f84-4ce9-9b90-61ea0f80ee0d","Type":"ContainerDied","Data":"443ef3f650d8fbe32e72497c1c38bba1b7b16635faf02be9c3449fc7ba19b248"} Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.708689 4915 scope.go:117] "RemoveContainer" containerID="bec621d3d287aef537baa9604283134644db9a9872b51b976d0580388579769d" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.708870 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gjvmw" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.715401 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79f7874b76-rrchs_53bd39be-21df-4cea-aed2-0ac820ce45b6/neutron-api/0.log" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.715501 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.733038 4915 scope.go:117] "RemoveContainer" containerID="9dba18cdec925441030673e35b291830b0f2392b2ae376e795f0c6564d26e348" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.743160 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.757026 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gjvmw"] Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.844748 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config\") pod \"53bd39be-21df-4cea-aed2-0ac820ce45b6\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.844812 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wbvf\" (UniqueName: \"kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf\") pod \"53bd39be-21df-4cea-aed2-0ac820ce45b6\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.844848 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs\") pod \"53bd39be-21df-4cea-aed2-0ac820ce45b6\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.844945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config\") pod \"53bd39be-21df-4cea-aed2-0ac820ce45b6\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " Jan 27 19:02:05 crc 
kubenswrapper[4915]: I0127 19:02:05.845006 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle\") pod \"53bd39be-21df-4cea-aed2-0ac820ce45b6\" (UID: \"53bd39be-21df-4cea-aed2-0ac820ce45b6\") " Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.854954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "53bd39be-21df-4cea-aed2-0ac820ce45b6" (UID: "53bd39be-21df-4cea-aed2-0ac820ce45b6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.855037 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf" (OuterVolumeSpecName: "kube-api-access-6wbvf") pod "53bd39be-21df-4cea-aed2-0ac820ce45b6" (UID: "53bd39be-21df-4cea-aed2-0ac820ce45b6"). InnerVolumeSpecName "kube-api-access-6wbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.905036 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53bd39be-21df-4cea-aed2-0ac820ce45b6" (UID: "53bd39be-21df-4cea-aed2-0ac820ce45b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.922950 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "53bd39be-21df-4cea-aed2-0ac820ce45b6" (UID: "53bd39be-21df-4cea-aed2-0ac820ce45b6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.932009 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config" (OuterVolumeSpecName: "config") pod "53bd39be-21df-4cea-aed2-0ac820ce45b6" (UID: "53bd39be-21df-4cea-aed2-0ac820ce45b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.947132 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.947171 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.947183 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wbvf\" (UniqueName: \"kubernetes.io/projected/53bd39be-21df-4cea-aed2-0ac820ce45b6-kube-api-access-6wbvf\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.947195 4915 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 
19:02:05 crc kubenswrapper[4915]: I0127 19:02:05.947207 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/53bd39be-21df-4cea-aed2-0ac820ce45b6-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.537479 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643271 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643610 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api-log" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643623 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api-log" Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643633 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-api" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643640 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-api" Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643649 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-httpd" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643655 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-httpd" Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643665 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="init" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643672 4915 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="init" Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643696 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643702 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api" Jan 27 19:02:06 crc kubenswrapper[4915]: E0127 19:02:06.643718 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="dnsmasq-dns" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643725 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="dnsmasq-dns" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643908 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643918 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-api" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643932 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" containerName="neutron-httpd" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643943 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d299c59-4dbe-4032-aadc-3c35ecde70b2" containerName="barbican-api-log" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.643951 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" containerName="dnsmasq-dns" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.644488 4915 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.650857 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.651032 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.651149 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-km84r" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.675577 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.730960 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79f7874b76-rrchs" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.767078 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79f7874b76-rrchs"] Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.767998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.768031 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.768160 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ssx\" (UniqueName: \"kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.768185 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.773943 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-79f7874b76-rrchs"] Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.869237 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ssx\" (UniqueName: \"kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.869291 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.869367 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc 
kubenswrapper[4915]: I0127 19:02:06.869382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.870262 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.873500 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.876171 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.901845 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ssx\" (UniqueName: \"kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx\") pod \"openstackclient\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") " pod="openstack/openstackclient" Jan 27 19:02:06 crc kubenswrapper[4915]: I0127 19:02:06.980378 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 19:02:07 crc kubenswrapper[4915]: I0127 19:02:07.367038 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53bd39be-21df-4cea-aed2-0ac820ce45b6" path="/var/lib/kubelet/pods/53bd39be-21df-4cea-aed2-0ac820ce45b6/volumes" Jan 27 19:02:07 crc kubenswrapper[4915]: I0127 19:02:07.368089 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6541065-8f84-4ce9-9b90-61ea0f80ee0d" path="/var/lib/kubelet/pods/f6541065-8f84-4ce9-9b90-61ea0f80ee0d/volumes" Jan 27 19:02:07 crc kubenswrapper[4915]: I0127 19:02:07.470898 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 19:02:07 crc kubenswrapper[4915]: I0127 19:02:07.740316 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0060401c-f9b6-4772-bce5-bda7633de81a","Type":"ContainerStarted","Data":"97d99b9298dfd763ca672f9a0ce5f375f821869ca421486b87a43b064fb04a9f"} Jan 27 19:02:08 crc kubenswrapper[4915]: E0127 19:02:08.883468 4915 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6.scope: Error finding container e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6: Status 404 returned error can't find the container with id e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6 Jan 27 19:02:09 crc kubenswrapper[4915]: E0127 19:02:09.070340 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-conmon-e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c1db90_61fc_4aa0_8371_eae7ac202752.slice/crio-71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01\": RecentStats: unable to find data in memory cache]" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.331847 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411315 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id\") pod \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411390 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts\") pod \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411497 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom\") pod \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411557 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnctd\" (UniqueName: \"kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd\") pod 
\"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411630 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data\") pod \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.411672 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle\") pod \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\" (UID: \"c2ba1dc4-00a6-4706-94f6-54e6badb718c\") " Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.412914 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.426198 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.450970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd" (OuterVolumeSpecName: "kube-api-access-qnctd") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "kube-api-access-qnctd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.452319 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts" (OuterVolumeSpecName: "scripts") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.503871 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.516446 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2ba1dc4-00a6-4706-94f6-54e6badb718c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.516475 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.516485 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.516495 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnctd\" (UniqueName: \"kubernetes.io/projected/c2ba1dc4-00a6-4706-94f6-54e6badb718c-kube-api-access-qnctd\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.516505 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.579922 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data" (OuterVolumeSpecName: "config-data") pod "c2ba1dc4-00a6-4706-94f6-54e6badb718c" (UID: "c2ba1dc4-00a6-4706-94f6-54e6badb718c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.618370 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ba1dc4-00a6-4706-94f6-54e6badb718c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.767946 4915 generic.go:334] "Generic (PLEG): container finished" podID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerID="3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19" exitCode=0 Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.768001 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.768015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerDied","Data":"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19"} Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.768042 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2ba1dc4-00a6-4706-94f6-54e6badb718c","Type":"ContainerDied","Data":"4fa753b7501a5f1e8bcf1207b7782b6f41d2132b48293558cb50f948406be991"} Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.768085 4915 scope.go:117] "RemoveContainer" containerID="a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.782645 4915 generic.go:334] "Generic (PLEG): container finished" podID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerID="1280df6d409410b600e18da6a6e8aafdbe4fd0199f72fea9509acc39fa8bd675" exitCode=0 Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.782697 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" 
event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerDied","Data":"1280df6d409410b600e18da6a6e8aafdbe4fd0199f72fea9509acc39fa8bd675"} Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.824570 4915 scope.go:117] "RemoveContainer" containerID="3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.824679 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.856056 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.867826 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:02:09 crc kubenswrapper[4915]: E0127 19:02:09.868161 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="cinder-scheduler" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.868187 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="cinder-scheduler" Jan 27 19:02:09 crc kubenswrapper[4915]: E0127 19:02:09.868202 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="probe" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.868208 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="probe" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.868393 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="probe" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.868424 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" containerName="cinder-scheduler" Jan 27 19:02:09 crc 
kubenswrapper[4915]: I0127 19:02:09.869229 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.872073 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.877705 4915 scope.go:117] "RemoveContainer" containerID="a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f" Jan 27 19:02:09 crc kubenswrapper[4915]: E0127 19:02:09.879966 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f\": container with ID starting with a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f not found: ID does not exist" containerID="a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.879995 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f"} err="failed to get container status \"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f\": rpc error: code = NotFound desc = could not find container \"a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f\": container with ID starting with a7c9798d3141c224935601c4b718221ad709b6e0615ba715809a5a46ccfa2e3f not found: ID does not exist" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.880019 4915 scope.go:117] "RemoveContainer" containerID="3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19" Jan 27 19:02:09 crc kubenswrapper[4915]: E0127 19:02:09.880224 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19\": container with ID starting with 3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19 not found: ID does not exist" containerID="3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.880245 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19"} err="failed to get container status \"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19\": rpc error: code = NotFound desc = could not find container \"3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19\": container with ID starting with 3941827b4396d0993a73bfdbaef1fab97b3133bb336cbec3806d67488ead2f19 not found: ID does not exist" Jan 27 19:02:09 crc kubenswrapper[4915]: I0127 19:02:09.881425 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.025616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.025722 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.025763 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.025969 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.026038 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg44l\" (UniqueName: \"kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.026207 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129004 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129056 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle\") 
pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129095 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129128 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg44l\" (UniqueName: \"kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.129256 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.130244 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.135725 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.138757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.139088 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.151819 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg44l\" (UniqueName: \"kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.164563 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") " pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.187100 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.272238 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb96bd94d-x5bv4" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.445388 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.445812 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.445845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.445953 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc8z2\" (UniqueName: \"kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.445985 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: 
\"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.451997 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2" (OuterVolumeSpecName: "kube-api-access-pc8z2") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "kube-api-access-pc8z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.454064 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.506491 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config" (OuterVolumeSpecName: "config") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.513249 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.547217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.547702 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") pod \"c540b760-c850-4418-ba54-5a404dc3dbbd\" (UID: \"c540b760-c850-4418-ba54-5a404dc3dbbd\") " Jan 27 19:02:10 crc kubenswrapper[4915]: W0127 19:02:10.547910 4915 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c540b760-c850-4418-ba54-5a404dc3dbbd/volumes/kubernetes.io~secret/ovndb-tls-certs Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.547936 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c540b760-c850-4418-ba54-5a404dc3dbbd" (UID: "c540b760-c850-4418-ba54-5a404dc3dbbd"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.548199 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc8z2\" (UniqueName: \"kubernetes.io/projected/c540b760-c850-4418-ba54-5a404dc3dbbd-kube-api-access-pc8z2\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.548229 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.548238 4915 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.548247 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.548255 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c540b760-c850-4418-ba54-5a404dc3dbbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.650926 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.805115 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb96bd94d-x5bv4" event={"ID":"c540b760-c850-4418-ba54-5a404dc3dbbd","Type":"ContainerDied","Data":"687c50bfe1b3eeebc2fda8e8986896f3b9229925aab3f64bacbce732edac5ae9"}
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.805387 4915 scope.go:117] "RemoveContainer" containerID="e56ef5c3c5d49086313d6f16eebe7dfaaf8ee91810e742a06d368894b5781c22"
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.805611 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb96bd94d-x5bv4"
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.827008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerStarted","Data":"b59f2e5a9d96ace452f57e3f94e15694be225aeb92d27dacfb40cfd021cd2cc9"}
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.847205 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"]
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.849606 4915 scope.go:117] "RemoveContainer" containerID="1280df6d409410b600e18da6a6e8aafdbe4fd0199f72fea9509acc39fa8bd675"
Jan 27 19:02:10 crc kubenswrapper[4915]: I0127 19:02:10.858584 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cb96bd94d-x5bv4"]
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.373393 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ba1dc4-00a6-4706-94f6-54e6badb718c" path="/var/lib/kubelet/pods/c2ba1dc4-00a6-4706-94f6-54e6badb718c/volumes"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.376189 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" path="/var/lib/kubelet/pods/c540b760-c850-4418-ba54-5a404dc3dbbd/volumes"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.697824 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"]
Jan 27 19:02:11 crc kubenswrapper[4915]: E0127 19:02:11.698474 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-api"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.698494 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-api"
Jan 27 19:02:11 crc kubenswrapper[4915]: E0127 19:02:11.698517 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-httpd"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.698524 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-httpd"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.698685 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-api"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.698708 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c540b760-c850-4418-ba54-5a404dc3dbbd" containerName="neutron-httpd"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.699693 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.701614 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.701915 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.702293 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.711896 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"]
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.798944 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.799022 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.799101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.799921 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.800091 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.800239 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.800290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.844745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerStarted","Data":"cae1cda15bc05c31ba74684aadd103b351b56954bbbc994031b930a06b055b44"}
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902103 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4q7\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902237 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902266 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902308 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.902348 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.903694 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.903722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.912542 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.929850 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.930570 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.933708 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:11 crc kubenswrapper[4915]: I0127 19:02:11.935308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.004422 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp4q7\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.021938 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp4q7\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7\") pod \"swift-proxy-5fb74bb44c-7sqss\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.042203 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.210966 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.211624 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-central-agent" containerID="cri-o://121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3" gracePeriod=30
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.212289 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="proxy-httpd" containerID="cri-o://c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d" gracePeriod=30
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.212368 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-notification-agent" containerID="cri-o://303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf" gracePeriod=30
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.212504 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="sg-core" containerID="cri-o://9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9" gracePeriod=30
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.231563 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": EOF"
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.661396 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"]
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.861672 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerStarted","Data":"c41546d68783dedc9647f66f8a74dbf9122942d0f78c5b48713646f399bbd1fc"}
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.864102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerStarted","Data":"b0f89e18a1656926b9da6f886ebd76bd323d27b6aecdce1513a27fa2441d920b"}
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870743 4915 generic.go:334] "Generic (PLEG): container finished" podID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerID="c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d" exitCode=0
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870768 4915 generic.go:334] "Generic (PLEG): container finished" podID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerID="9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9" exitCode=2
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870779 4915 generic.go:334] "Generic (PLEG): container finished" podID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerID="121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3" exitCode=0
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870818 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerDied","Data":"c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d"}
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerDied","Data":"9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9"}
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.870851 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerDied","Data":"121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3"}
Jan 27 19:02:12 crc kubenswrapper[4915]: I0127 19:02:12.890097 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.890076237 podStartE2EDuration="3.890076237s" podCreationTimestamp="2026-01-27 19:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:12.886337586 +0000 UTC m=+1224.244191260" watchObservedRunningTime="2026-01-27 19:02:12.890076237 +0000 UTC m=+1224.247929901"
Jan 27 19:02:13 crc kubenswrapper[4915]: I0127 19:02:13.894916 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerStarted","Data":"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7"}
Jan 27 19:02:13 crc kubenswrapper[4915]: I0127 19:02:13.895456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerStarted","Data":"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe"}
Jan 27 19:02:13 crc kubenswrapper[4915]: I0127 19:02:13.920737 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5fb74bb44c-7sqss" podStartSLOduration=2.920712966 podStartE2EDuration="2.920712966s" podCreationTimestamp="2026-01-27 19:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:13.911013918 +0000 UTC m=+1225.268867592" watchObservedRunningTime="2026-01-27 19:02:13.920712966 +0000 UTC m=+1225.278566630"
Jan 27 19:02:14 crc kubenswrapper[4915]: I0127 19:02:14.909507 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:14 crc kubenswrapper[4915]: I0127 19:02:14.909824 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:15 crc kubenswrapper[4915]: I0127 19:02:15.187825 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 19:02:16 crc kubenswrapper[4915]: I0127 19:02:16.942316 4915 generic.go:334] "Generic (PLEG): container finished" podID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerID="303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf" exitCode=0
Jan 27 19:02:16 crc kubenswrapper[4915]: I0127 19:02:16.942393 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerDied","Data":"303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf"}
Jan 27 19:02:17 crc kubenswrapper[4915]: I0127 19:02:17.078780 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb74bb44c-7sqss"
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.668464 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.849623 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb7gf\" (UniqueName: \"kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.849716 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.849859 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850323 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850528 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850555 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850584 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850608 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml\") pod \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\" (UID: \"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261\") "
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850871 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.850887 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.854763 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts" (OuterVolumeSpecName: "scripts") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.854970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf" (OuterVolumeSpecName: "kube-api-access-tb7gf") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "kube-api-access-tb7gf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.875654 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.940643 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.953539 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb7gf\" (UniqueName: \"kubernetes.io/projected/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-kube-api-access-tb7gf\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.953580 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.953589 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.953598 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.953606 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.969032 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data" (OuterVolumeSpecName: "config-data") pod "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" (UID: "7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.983640 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261","Type":"ContainerDied","Data":"2ff177e0fa03679d08aafc69fb7c980c63d9b9312f063e17543baef940ceb413"}
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.983686 4915 scope.go:117] "RemoveContainer" containerID="c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d"
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.983828 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:19 crc kubenswrapper[4915]: I0127 19:02:19.993459 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0060401c-f9b6-4772-bce5-bda7633de81a","Type":"ContainerStarted","Data":"e18f372d20db23571278d2902be8558c9c9c1236c8adb8a38d079d6827836640"}
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.013357 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.149837173 podStartE2EDuration="14.012423648s" podCreationTimestamp="2026-01-27 19:02:06 +0000 UTC" firstStartedPulling="2026-01-27 19:02:07.484039877 +0000 UTC m=+1218.841893541" lastFinishedPulling="2026-01-27 19:02:19.346626332 +0000 UTC m=+1230.704480016" observedRunningTime="2026-01-27 19:02:20.007087357 +0000 UTC m=+1231.364941021" watchObservedRunningTime="2026-01-27 19:02:20.012423648 +0000 UTC m=+1231.370277322"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.033931 4915 scope.go:117] "RemoveContainer" containerID="9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.051142 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.056431 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.059090 4915 scope.go:117] "RemoveContainer" containerID="303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.068006 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.079654 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:20 crc kubenswrapper[4915]: E0127 19:02:20.080104 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="proxy-httpd"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080119 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="proxy-httpd"
Jan 27 19:02:20 crc kubenswrapper[4915]: E0127 19:02:20.080138 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="sg-core"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080144 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="sg-core"
Jan 27 19:02:20 crc kubenswrapper[4915]: E0127 19:02:20.080151 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-notification-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080157 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-notification-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: E0127 19:02:20.080168 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-central-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080174 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-central-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080350 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-notification-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080363 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="sg-core"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080377 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="ceilometer-central-agent"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.080393 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" containerName="proxy-httpd"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.082638 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.085493 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.087161 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.088335 4915 scope.go:117] "RemoveContainer" containerID="121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.090515 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.258862 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.258954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.258981 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9rg\" (UniqueName: \"kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.258999 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.259038 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.259101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.259140 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361170 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361225 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9rg\" (UniqueName: \"kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361347 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361371 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.361489 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0"
Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.363005 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName:
\"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.363179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.365583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.367964 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.372776 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.378656 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9rg\" (UniqueName: \"kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.385695 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts\") pod \"ceilometer-0\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.399740 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.441846 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.624401 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.624451 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.624490 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.625152 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 
19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.625204 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a" gracePeriod=600 Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.737663 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:20 crc kubenswrapper[4915]: I0127 19:02:20.879344 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.001582 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerStarted","Data":"558c1bc1b0d6cdb1c535909aeabb4089eca607288562b9b74589df1a83892e3a"} Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.006168 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a" exitCode=0 Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.006244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a"} Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.006296 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0"} Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.006318 4915 scope.go:117] 
"RemoveContainer" containerID="0fd9f5796cc3a57e9fc7c4db2c85fe065ddeb03c518ee01197233c97b80d5a44" Jan 27 19:02:21 crc kubenswrapper[4915]: I0127 19:02:21.380085 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261" path="/var/lib/kubelet/pods/7bcd1cfb-d2ba-4f8e-8d6b-d90f4d1b1261/volumes" Jan 27 19:02:22 crc kubenswrapper[4915]: I0127 19:02:22.024312 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerStarted","Data":"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd"} Jan 27 19:02:22 crc kubenswrapper[4915]: I0127 19:02:22.049226 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5fb74bb44c-7sqss" Jan 27 19:02:22 crc kubenswrapper[4915]: I0127 19:02:22.910002 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:22 crc kubenswrapper[4915]: I0127 19:02:22.911352 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-log" containerID="cri-o://eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5" gracePeriod=30 Jan 27 19:02:22 crc kubenswrapper[4915]: I0127 19:02:22.911663 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-httpd" containerID="cri-o://36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9" gracePeriod=30 Jan 27 19:02:23 crc kubenswrapper[4915]: I0127 19:02:23.042062 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerStarted","Data":"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d"} Jan 27 19:02:23 crc kubenswrapper[4915]: I0127 19:02:23.045767 4915 generic.go:334] "Generic (PLEG): container finished" podID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerID="eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5" exitCode=143 Jan 27 19:02:23 crc kubenswrapper[4915]: I0127 19:02:23.045833 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerDied","Data":"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5"} Jan 27 19:02:24 crc kubenswrapper[4915]: I0127 19:02:24.056781 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerStarted","Data":"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3"} Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069084 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerStarted","Data":"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91"} Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069356 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069328 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-central-agent" containerID="cri-o://8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd" gracePeriod=30 Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069387 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-notification-agent" containerID="cri-o://53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d" gracePeriod=30 Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069393 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="proxy-httpd" containerID="cri-o://641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91" gracePeriod=30 Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.069364 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="sg-core" containerID="cri-o://dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3" gracePeriod=30 Jan 27 19:02:25 crc kubenswrapper[4915]: I0127 19:02:25.089817 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.297756964 podStartE2EDuration="5.089802975s" podCreationTimestamp="2026-01-27 19:02:20 +0000 UTC" firstStartedPulling="2026-01-27 19:02:20.90257427 +0000 UTC m=+1232.260427934" lastFinishedPulling="2026-01-27 19:02:24.694620271 +0000 UTC m=+1236.052473945" observedRunningTime="2026-01-27 19:02:25.08641713 +0000 UTC m=+1236.444270794" watchObservedRunningTime="2026-01-27 19:02:25.089802975 +0000 UTC m=+1236.447656639" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.079989 4915 generic.go:334] "Generic (PLEG): container finished" podID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerID="dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3" exitCode=2 Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.080233 4915 generic.go:334] "Generic (PLEG): container finished" podID="cd52116b-0ba6-450d-be65-ac0d409827f5" 
containerID="53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d" exitCode=0 Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.080056 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerDied","Data":"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3"} Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.080267 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerDied","Data":"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d"} Jan 27 19:02:26 crc kubenswrapper[4915]: W0127 19:02:26.611931 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bcd1cfb_d2ba_4f8e_8d6b_d90f4d1b1261.slice/crio-121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3.scope WatchSource:0}: Error finding container 121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3: Status 404 returned error can't find the container with id 121e1ca2e790d4c007a783ec192a5f5d0c67a6afb9213e5ec997b55d9372eeb3 Jan 27 19:02:26 crc kubenswrapper[4915]: W0127 19:02:26.614580 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bcd1cfb_d2ba_4f8e_8d6b_d90f4d1b1261.slice/crio-303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf.scope WatchSource:0}: Error finding container 303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf: Status 404 returned error can't find the container with id 303ea831b8f5584be90030e5592d1f0f9bb6a4982969a7d441cf1c2717f674cf Jan 27 19:02:26 crc kubenswrapper[4915]: W0127 19:02:26.614902 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bcd1cfb_d2ba_4f8e_8d6b_d90f4d1b1261.slice/crio-9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9.scope WatchSource:0}: Error finding container 9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9: Status 404 returned error can't find the container with id 9b57f5253fc08cf41baf11f81a31afe6e950937ff9dd7f6a78ce7d976bd44cb9 Jan 27 19:02:26 crc kubenswrapper[4915]: W0127 19:02:26.615133 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bcd1cfb_d2ba_4f8e_8d6b_d90f4d1b1261.slice/crio-c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d.scope WatchSource:0}: Error finding container c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d: Status 404 returned error can't find the container with id c4248a94eedc3362a28233c83dbb06be4142c20ed3181b3be320b86ae1c3438d Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.784539 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.984759 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.984838 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.984864 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.984928 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.984948 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.985071 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.985115 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qkct\" (UniqueName: \"kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.985185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data\") pod \"ab212b87-cac4-4075-9899-28c8ca8bae4b\" (UID: \"ab212b87-cac4-4075-9899-28c8ca8bae4b\") " Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.985542 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.985748 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.986098 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs" (OuterVolumeSpecName: "logs") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.991406 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.993142 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts" (OuterVolumeSpecName: "scripts") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:26 crc kubenswrapper[4915]: I0127 19:02:26.994104 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct" (OuterVolumeSpecName: "kube-api-access-2qkct") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "kube-api-access-2qkct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.032050 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.038361 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data" (OuterVolumeSpecName: "config-data") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.041299 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab212b87-cac4-4075-9899-28c8ca8bae4b" (UID: "ab212b87-cac4-4075-9899-28c8ca8bae4b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086580 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086623 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qkct\" (UniqueName: \"kubernetes.io/projected/ab212b87-cac4-4075-9899-28c8ca8bae4b-kube-api-access-2qkct\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086638 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086667 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 
27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086679 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086694 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab212b87-cac4-4075-9899-28c8ca8bae4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.086707 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab212b87-cac4-4075-9899-28c8ca8bae4b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.090628 4915 generic.go:334] "Generic (PLEG): container finished" podID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerID="b20ae3a1ddf485113076be8bb8019dc02f28942c17f31d9c3c4eb2b71e63a501" exitCode=137 Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.090840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerDied","Data":"b20ae3a1ddf485113076be8bb8019dc02f28942c17f31d9c3c4eb2b71e63a501"} Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.092701 4915 generic.go:334] "Generic (PLEG): container finished" podID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerID="36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9" exitCode=0 Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.092824 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerDied","Data":"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9"} Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.092914 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab212b87-cac4-4075-9899-28c8ca8bae4b","Type":"ContainerDied","Data":"3449e9500fbf287fb5075c1f5b1da2dd76e56098add32cdf199e332ffe41412c"} Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.092998 4915 scope.go:117] "RemoveContainer" containerID="36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.093215 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.109234 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.143862 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.188383 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.188450 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.199105 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:27 crc kubenswrapper[4915]: E0127 19:02:27.199609 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-httpd" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.199654 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-httpd" Jan 27 19:02:27 crc kubenswrapper[4915]: E0127 19:02:27.199682 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-log" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.199691 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-log" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.199948 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-httpd" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.199974 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" containerName="glance-log" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.201106 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.203091 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.203337 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.207298 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.235315 4915 scope.go:117] "RemoveContainer" containerID="eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.366729 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab212b87-cac4-4075-9899-28c8ca8bae4b" path="/var/lib/kubelet/pods/ab212b87-cac4-4075-9899-28c8ca8bae4b/volumes" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.391881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392089 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392201 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392277 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392351 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmjj\" (UniqueName: \"kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392463 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392566 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.392674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.442303 4915 scope.go:117] "RemoveContainer" containerID="36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9" Jan 27 19:02:27 crc kubenswrapper[4915]: E0127 19:02:27.442817 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9\": container with ID starting with 36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9 not found: ID does not exist" containerID="36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.442864 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9"} err="failed to get container 
status \"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9\": rpc error: code = NotFound desc = could not find container \"36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9\": container with ID starting with 36403e6d91375fdf20e47e86e7e39de77372ec912070c6ea562a9bbcb00877c9 not found: ID does not exist" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.442893 4915 scope.go:117] "RemoveContainer" containerID="eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5" Jan 27 19:02:27 crc kubenswrapper[4915]: E0127 19:02:27.443202 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5\": container with ID starting with eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5 not found: ID does not exist" containerID="eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.443228 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5"} err="failed to get container status \"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5\": rpc error: code = NotFound desc = could not find container \"eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5\": container with ID starting with eac834827418aba18158643238f641ee146250a04d7df698569ab918434a53d5 not found: ID does not exist" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.494844 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 
19:02:27.494917 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.494950 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdmjj\" (UniqueName: \"kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495022 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495098 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495177 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495231 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495254 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495451 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495594 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.495594 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.500369 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.500452 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.507831 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.526501 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.543170 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdmjj\" (UniqueName: \"kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj\") pod \"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.569362 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.622760 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-979tj"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.624262 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.631764 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-979tj"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.704186 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.704290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twwt\" (UniqueName: \"kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.716469 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6ch54"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.717611 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.752841 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6ch54"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.805191 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.805267 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28d4g\" (UniqueName: \"kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.805294 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twwt\" (UniqueName: \"kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.805344 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.806048 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.807054 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dwgr4"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.808067 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.827608 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.833969 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twwt\" (UniqueName: \"kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt\") pod \"nova-api-db-create-979tj\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") " pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.835021 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8e80-account-create-update-dbknm"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.836074 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.843143 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.847812 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dwgr4"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.865559 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.882261 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8e80-account-create-update-dbknm"] Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.934758 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28d4g\" (UniqueName: \"kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.934847 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.935458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.952512 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-979tj" Jan 27 19:02:27 crc kubenswrapper[4915]: I0127 19:02:27.957395 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28d4g\" (UniqueName: \"kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g\") pod \"nova-cell0-db-create-6ch54\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") " pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.019895 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-rd4bq"] Jan 27 19:02:28 crc kubenswrapper[4915]: E0127 19:02:28.020516 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.020527 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api" Jan 27 19:02:28 crc kubenswrapper[4915]: E0127 19:02:28.020540 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api-log" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.020546 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api-log" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.020754 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.020773 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" containerName="cinder-api-log" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.021311 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.025876 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.037557 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.037602 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.037653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.037705 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bj6c\" (UniqueName: \"kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.037734 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 
19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038028 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038087 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle\") pod \"ace79e7e-87c2-4d54-89df-b272912bfcba\" (UID: \"ace79e7e-87c2-4d54-89df-b272912bfcba\") " Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038295 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp8q6\" (UniqueName: \"kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6\") pod \"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038325 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts\") pod \"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc 
kubenswrapper[4915]: I0127 19:02:28.038430 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bfsv\" (UniqueName: \"kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038523 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.038547 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8gn\" (UniqueName: \"kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.042434 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-rd4bq"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.045507 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.046137 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts" (OuterVolumeSpecName: "scripts") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.046161 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ch54" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.046383 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs" (OuterVolumeSpecName: "logs") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.048224 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c" (OuterVolumeSpecName: "kube-api-access-7bj6c") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "kube-api-access-7bj6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.048314 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.090463 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.106298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace79e7e-87c2-4d54-89df-b272912bfcba","Type":"ContainerDied","Data":"94d03b9c7292ecb9d1590484579ae5bde631e673fe9472dd58a3b059203b4d0c"} Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.106361 4915 scope.go:117] "RemoveContainer" containerID="b20ae3a1ddf485113076be8bb8019dc02f28942c17f31d9c3c4eb2b71e63a501" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.106468 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.139924 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.139975 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8gn\" (UniqueName: \"kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8q6\" (UniqueName: \"kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6\") pod \"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140092 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140160 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts\") pod 
\"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140204 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bfsv\" (UniqueName: \"kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140285 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bj6c\" (UniqueName: \"kubernetes.io/projected/ace79e7e-87c2-4d54-89df-b272912bfcba-kube-api-access-7bj6c\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140302 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140315 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace79e7e-87c2-4d54-89df-b272912bfcba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140327 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140340 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140352 4915 reconciler_common.go:293] 
"Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace79e7e-87c2-4d54-89df-b272912bfcba-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.140975 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.141360 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.141613 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts\") pod \"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.165499 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8gn\" (UniqueName: \"kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn\") pod \"nova-cell0-fc87-account-create-update-rd4bq\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") " pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.174428 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data" 
(OuterVolumeSpecName: "config-data") pod "ace79e7e-87c2-4d54-89df-b272912bfcba" (UID: "ace79e7e-87c2-4d54-89df-b272912bfcba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.175052 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bfsv\" (UniqueName: \"kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv\") pod \"nova-api-8e80-account-create-update-dbknm\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") " pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.195088 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8q6\" (UniqueName: \"kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6\") pod \"nova-cell1-db-create-dwgr4\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") " pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.236807 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-dbknm" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.244731 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace79e7e-87c2-4d54-89df-b272912bfcba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.262145 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5a93-account-create-update-zkn82"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.269724 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.272356 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.333190 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5a93-account-create-update-zkn82"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.365667 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.375183 4915 scope.go:117] "RemoveContainer" containerID="ce3801bf2a5b527a4d3df4d49d0e7a466aafd70613836972cc676d35614d70db" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.465861 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.469155 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.469308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj6xz\" (UniqueName: \"kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.477457 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dwgr4" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.481876 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.495589 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.514409 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.517914 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.518191 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.518986 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.563803 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.571926 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj6xz\" (UniqueName: \"kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.572015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " 
pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.574736 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.603422 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj6xz\" (UniqueName: \"kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz\") pod \"nova-cell1-5a93-account-create-update-zkn82\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") " pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.652641 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675732 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675751 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675862 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675886 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7qh\" (UniqueName: \"kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.675968 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.695903 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-979tj"] Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.715846 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5a93-account-create-update-zkn82" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.778543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.778950 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779005 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7qh\" (UniqueName: \"kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779060 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779129 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779104 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779149 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779255 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779314 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " 
pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.779336 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.781233 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.783419 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.786337 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.790593 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.798697 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data\") 
pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.801000 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.801423 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.802970 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c7qh\" (UniqueName: \"kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh\") pod \"cinder-api-0\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.845908 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:02:28 crc kubenswrapper[4915]: I0127 19:02:28.876328 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6ch54"] Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.013380 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8e80-account-create-update-dbknm"] Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.085083 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dwgr4"] Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.096632 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-rd4bq"] Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.150311 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e80-account-create-update-dbknm" event={"ID":"4b270a3c-b3a7-484a-ba5d-3acd08465527","Type":"ContainerStarted","Data":"7b981dd3b9193e04af631b21303d37648dd055b1ffe0017f003baa09bb79dbcb"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.152454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerStarted","Data":"e380097d1f9c4506ffc90c27f63341d15c33021f6e9ed9484e26a305efd420d2"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.185555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-979tj" event={"ID":"e9960a75-b93c-4b1b-b6f8-6f2168b597df","Type":"ContainerStarted","Data":"db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.185595 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-979tj" event={"ID":"e9960a75-b93c-4b1b-b6f8-6f2168b597df","Type":"ContainerStarted","Data":"c0d64659bf0dc9eba16f7c6df86d5d02dd32e1f4ed6ea654371489fb6e805cef"} 
Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.188847 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dwgr4" event={"ID":"08034122-eb86-44a0-aba7-6d6d07afee31","Type":"ContainerStarted","Data":"725cfdb21a97d7b10ca999d7cdb1d5f611863288628d99d9175f50629b9c42b5"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.191605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" event={"ID":"ddd31932-4bdb-4426-b2bc-77c6cf650a50","Type":"ContainerStarted","Data":"ab2c803a554766c133c8eddd307ca241885676a79096ed215d69bc7e11e21672"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.193415 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ch54" event={"ID":"d786e434-5fa6-4259-a6bf-594f61b7364a","Type":"ContainerStarted","Data":"61a56d6aad544cfbb1193b75525a7581b1849c4666207e170a53fe82ac0995c8"} Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.217986 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-979tj" podStartSLOduration=2.217961428 podStartE2EDuration="2.217961428s" podCreationTimestamp="2026-01-27 19:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:29.211706289 +0000 UTC m=+1240.569559953" watchObservedRunningTime="2026-01-27 19:02:29.217961428 +0000 UTC m=+1240.575815092" Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.311195 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5a93-account-create-update-zkn82"] Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 19:02:29.374765 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace79e7e-87c2-4d54-89df-b272912bfcba" path="/var/lib/kubelet/pods/ace79e7e-87c2-4d54-89df-b272912bfcba/volumes" Jan 27 19:02:29 crc kubenswrapper[4915]: I0127 
19:02:29.430639 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.205547 4915 generic.go:334] "Generic (PLEG): container finished" podID="e9960a75-b93c-4b1b-b6f8-6f2168b597df" containerID="db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.205634 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-979tj" event={"ID":"e9960a75-b93c-4b1b-b6f8-6f2168b597df","Type":"ContainerDied","Data":"db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.208748 4915 generic.go:334] "Generic (PLEG): container finished" podID="08034122-eb86-44a0-aba7-6d6d07afee31" containerID="94b37cb1181144ca154da470500875bce1c1eb852ba38ce3063c820112e7cac3" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.208831 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dwgr4" event={"ID":"08034122-eb86-44a0-aba7-6d6d07afee31","Type":"ContainerDied","Data":"94b37cb1181144ca154da470500875bce1c1eb852ba38ce3063c820112e7cac3"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.212672 4915 generic.go:334] "Generic (PLEG): container finished" podID="d786e434-5fa6-4259-a6bf-594f61b7364a" containerID="f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.212731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ch54" event={"ID":"d786e434-5fa6-4259-a6bf-594f61b7364a","Type":"ContainerDied","Data":"f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.220991 4915 generic.go:334] "Generic (PLEG): container finished" podID="4b270a3c-b3a7-484a-ba5d-3acd08465527" 
containerID="e41e2a390a13486abb25e163c192e86770faad8b33152303528c9702730fc6bb" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.221129 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e80-account-create-update-dbknm" event={"ID":"4b270a3c-b3a7-484a-ba5d-3acd08465527","Type":"ContainerDied","Data":"e41e2a390a13486abb25e163c192e86770faad8b33152303528c9702730fc6bb"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.225082 4915 generic.go:334] "Generic (PLEG): container finished" podID="44c9af00-6a16-4e30-b375-95d5bbe7bec6" containerID="e48ae78e0ee85508aad39735796b0214f22e733b94c1e67c95c5d48eb01c99e4" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.225282 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5a93-account-create-update-zkn82" event={"ID":"44c9af00-6a16-4e30-b375-95d5bbe7bec6","Type":"ContainerDied","Data":"e48ae78e0ee85508aad39735796b0214f22e733b94c1e67c95c5d48eb01c99e4"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.225331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5a93-account-create-update-zkn82" event={"ID":"44c9af00-6a16-4e30-b375-95d5bbe7bec6","Type":"ContainerStarted","Data":"5407753cd6d9ccda8beb0ca357aaf549e4621d138971d3b505733ddefe6f419a"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.228691 4915 generic.go:334] "Generic (PLEG): container finished" podID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerID="8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.228780 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerDied","Data":"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.229702 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerStarted","Data":"f9ad03f4f9e761435ad7cf78ce899ccee0fa26fae2f7c76ed3e984839f42ecc2"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.230637 4915 generic.go:334] "Generic (PLEG): container finished" podID="ddd31932-4bdb-4426-b2bc-77c6cf650a50" containerID="fb3794bee7f36e17deb1063b4afdf6f679603461eaf2081a52d75e27cbc706d1" exitCode=0 Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.230671 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" event={"ID":"ddd31932-4bdb-4426-b2bc-77c6cf650a50","Type":"ContainerDied","Data":"fb3794bee7f36e17deb1063b4afdf6f679603461eaf2081a52d75e27cbc706d1"} Jan 27 19:02:30 crc kubenswrapper[4915]: I0127 19:02:30.238022 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerStarted","Data":"cd55b928203a100adbd69bb6a3576cba209345a89c3fe528d411607287b3964d"} Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.251541 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerStarted","Data":"f76d58b03c67cca894c7c03c90ad9ed4cc48b08790c07b0c5345806e5f49dc1d"} Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.255364 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerStarted","Data":"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f"} Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.255405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerStarted","Data":"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2"} Jan 
27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.873041 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-dbknm"
Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.915397 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.915374602 podStartE2EDuration="4.915374602s" podCreationTimestamp="2026-01-27 19:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:31.273888197 +0000 UTC m=+1242.631741881" watchObservedRunningTime="2026-01-27 19:02:31.915374602 +0000 UTC m=+1243.273228266"
Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.992188 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts\") pod \"4b270a3c-b3a7-484a-ba5d-3acd08465527\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") "
Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.992348 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bfsv\" (UniqueName: \"kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv\") pod \"4b270a3c-b3a7-484a-ba5d-3acd08465527\" (UID: \"4b270a3c-b3a7-484a-ba5d-3acd08465527\") "
Jan 27 19:02:31 crc kubenswrapper[4915]: I0127 19:02:31.993956 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b270a3c-b3a7-484a-ba5d-3acd08465527" (UID: "4b270a3c-b3a7-484a-ba5d-3acd08465527"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.003059 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv" (OuterVolumeSpecName: "kube-api-access-8bfsv") pod "4b270a3c-b3a7-484a-ba5d-3acd08465527" (UID: "4b270a3c-b3a7-484a-ba5d-3acd08465527"). InnerVolumeSpecName "kube-api-access-8bfsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.095968 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b270a3c-b3a7-484a-ba5d-3acd08465527-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.096000 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bfsv\" (UniqueName: \"kubernetes.io/projected/4b270a3c-b3a7-484a-ba5d-3acd08465527-kube-api-access-8bfsv\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.125050 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5a93-account-create-update-zkn82"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.129803 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.138041 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dwgr4"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.158505 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-979tj"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.172019 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ch54"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.274489 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e80-account-create-update-dbknm" event={"ID":"4b270a3c-b3a7-484a-ba5d-3acd08465527","Type":"ContainerDied","Data":"7b981dd3b9193e04af631b21303d37648dd055b1ffe0017f003baa09bb79dbcb"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.274535 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b981dd3b9193e04af631b21303d37648dd055b1ffe0017f003baa09bb79dbcb"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.274577 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-dbknm"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.275937 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5a93-account-create-update-zkn82" event={"ID":"44c9af00-6a16-4e30-b375-95d5bbe7bec6","Type":"ContainerDied","Data":"5407753cd6d9ccda8beb0ca357aaf549e4621d138971d3b505733ddefe6f419a"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.275979 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5407753cd6d9ccda8beb0ca357aaf549e4621d138971d3b505733ddefe6f419a"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.276048 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5a93-account-create-update-zkn82"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.277160 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-979tj"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.277187 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-979tj" event={"ID":"e9960a75-b93c-4b1b-b6f8-6f2168b597df","Type":"ContainerDied","Data":"c0d64659bf0dc9eba16f7c6df86d5d02dd32e1f4ed6ea654371489fb6e805cef"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.277233 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d64659bf0dc9eba16f7c6df86d5d02dd32e1f4ed6ea654371489fb6e805cef"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.278621 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dwgr4" event={"ID":"08034122-eb86-44a0-aba7-6d6d07afee31","Type":"ContainerDied","Data":"725cfdb21a97d7b10ca999d7cdb1d5f611863288628d99d9175f50629b9c42b5"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.278660 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dwgr4"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.278665 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725cfdb21a97d7b10ca999d7cdb1d5f611863288628d99d9175f50629b9c42b5"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.280073 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq" event={"ID":"ddd31932-4bdb-4426-b2bc-77c6cf650a50","Type":"ContainerDied","Data":"ab2c803a554766c133c8eddd307ca241885676a79096ed215d69bc7e11e21672"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.280098 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab2c803a554766c133c8eddd307ca241885676a79096ed215d69bc7e11e21672"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.280137 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-rd4bq"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.282045 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6ch54" event={"ID":"d786e434-5fa6-4259-a6bf-594f61b7364a","Type":"ContainerDied","Data":"61a56d6aad544cfbb1193b75525a7581b1849c4666207e170a53fe82ac0995c8"}
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.282090 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a56d6aad544cfbb1193b75525a7581b1849c4666207e170a53fe82ac0995c8"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.282169 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6ch54"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.282878 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299391 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twwt\" (UniqueName: \"kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt\") pod \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299501 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts\") pod \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299571 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp8q6\" (UniqueName: \"kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6\") pod \"08034122-eb86-44a0-aba7-6d6d07afee31\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj6xz\" (UniqueName: \"kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz\") pod \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\" (UID: \"44c9af00-6a16-4e30-b375-95d5bbe7bec6\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299715 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28d4g\" (UniqueName: \"kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g\") pod \"d786e434-5fa6-4259-a6bf-594f61b7364a\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299757 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts\") pod \"d786e434-5fa6-4259-a6bf-594f61b7364a\" (UID: \"d786e434-5fa6-4259-a6bf-594f61b7364a\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299811 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r8gn\" (UniqueName: \"kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn\") pod \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts\") pod \"08034122-eb86-44a0-aba7-6d6d07afee31\" (UID: \"08034122-eb86-44a0-aba7-6d6d07afee31\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.299956 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts\") pod \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\" (UID: \"e9960a75-b93c-4b1b-b6f8-6f2168b597df\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300000 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts\") pod \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\" (UID: \"ddd31932-4bdb-4426-b2bc-77c6cf650a50\") "
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d786e434-5fa6-4259-a6bf-594f61b7364a" (UID: "d786e434-5fa6-4259-a6bf-594f61b7364a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300593 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d786e434-5fa6-4259-a6bf-594f61b7364a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300740 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08034122-eb86-44a0-aba7-6d6d07afee31" (UID: "08034122-eb86-44a0-aba7-6d6d07afee31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300760 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9960a75-b93c-4b1b-b6f8-6f2168b597df" (UID: "e9960a75-b93c-4b1b-b6f8-6f2168b597df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.300847 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddd31932-4bdb-4426-b2bc-77c6cf650a50" (UID: "ddd31932-4bdb-4426-b2bc-77c6cf650a50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.301953 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44c9af00-6a16-4e30-b375-95d5bbe7bec6" (UID: "44c9af00-6a16-4e30-b375-95d5bbe7bec6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.312533 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.31251692 podStartE2EDuration="4.31251692s" podCreationTimestamp="2026-01-27 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:02:32.301390492 +0000 UTC m=+1243.659244156" watchObservedRunningTime="2026-01-27 19:02:32.31251692 +0000 UTC m=+1243.670370574"
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.319935 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz" (OuterVolumeSpecName: "kube-api-access-mj6xz") pod "44c9af00-6a16-4e30-b375-95d5bbe7bec6" (UID: "44c9af00-6a16-4e30-b375-95d5bbe7bec6"). InnerVolumeSpecName "kube-api-access-mj6xz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.320747 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt" (OuterVolumeSpecName: "kube-api-access-9twwt") pod "e9960a75-b93c-4b1b-b6f8-6f2168b597df" (UID: "e9960a75-b93c-4b1b-b6f8-6f2168b597df"). InnerVolumeSpecName "kube-api-access-9twwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.331599 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn" (OuterVolumeSpecName: "kube-api-access-9r8gn") pod "ddd31932-4bdb-4426-b2bc-77c6cf650a50" (UID: "ddd31932-4bdb-4426-b2bc-77c6cf650a50"). InnerVolumeSpecName "kube-api-access-9r8gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.331667 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g" (OuterVolumeSpecName: "kube-api-access-28d4g") pod "d786e434-5fa6-4259-a6bf-594f61b7364a" (UID: "d786e434-5fa6-4259-a6bf-594f61b7364a"). InnerVolumeSpecName "kube-api-access-28d4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.331751 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6" (OuterVolumeSpecName: "kube-api-access-lp8q6") pod "08034122-eb86-44a0-aba7-6d6d07afee31" (UID: "08034122-eb86-44a0-aba7-6d6d07afee31"). InnerVolumeSpecName "kube-api-access-lp8q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.403961 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9960a75-b93c-4b1b-b6f8-6f2168b597df-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.403990 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddd31932-4bdb-4426-b2bc-77c6cf650a50-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404000 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9twwt\" (UniqueName: \"kubernetes.io/projected/e9960a75-b93c-4b1b-b6f8-6f2168b597df-kube-api-access-9twwt\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404011 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c9af00-6a16-4e30-b375-95d5bbe7bec6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404020 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp8q6\" (UniqueName: \"kubernetes.io/projected/08034122-eb86-44a0-aba7-6d6d07afee31-kube-api-access-lp8q6\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404031 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj6xz\" (UniqueName: \"kubernetes.io/projected/44c9af00-6a16-4e30-b375-95d5bbe7bec6-kube-api-access-mj6xz\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404040 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28d4g\" (UniqueName: \"kubernetes.io/projected/d786e434-5fa6-4259-a6bf-594f61b7364a-kube-api-access-28d4g\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404050 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r8gn\" (UniqueName: \"kubernetes.io/projected/ddd31932-4bdb-4426-b2bc-77c6cf650a50-kube-api-access-9r8gn\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.404059 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08034122-eb86-44a0-aba7-6d6d07afee31-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.944091 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.944737 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-httpd" containerID="cri-o://3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75" gracePeriod=30
Jan 27 19:02:32 crc kubenswrapper[4915]: I0127 19:02:32.944951 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-log" containerID="cri-o://21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd" gracePeriod=30
Jan 27 19:02:33 crc kubenswrapper[4915]: I0127 19:02:33.296203 4915 generic.go:334] "Generic (PLEG): container finished" podID="c4dfd337-f738-4898-82e2-1e0362890db6" containerID="21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd" exitCode=143
Jan 27 19:02:33 crc kubenswrapper[4915]: I0127 19:02:33.296346 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerDied","Data":"21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd"}
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.333183 4915 generic.go:334] "Generic (PLEG): container finished" podID="c4dfd337-f738-4898-82e2-1e0362890db6" containerID="3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75" exitCode=0
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.333268 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerDied","Data":"3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75"}
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.826693 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.827730 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.827758 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.878910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:37 crc kubenswrapper[4915]: I0127 19:02:37.897286 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.000488 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.001361 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs" (OuterVolumeSpecName: "logs") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.001435 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.001525 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002174 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7psjl\" (UniqueName: \"kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002224 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002272 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002364 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002453 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run\") pod \"c4dfd337-f738-4898-82e2-1e0362890db6\" (UID: \"c4dfd337-f738-4898-82e2-1e0362890db6\") "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.002979 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.003713 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.003746 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4dfd337-f738-4898-82e2-1e0362890db6-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.008109 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.022372 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl" (OuterVolumeSpecName: "kube-api-access-7psjl") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "kube-api-access-7psjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.036915 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts" (OuterVolumeSpecName: "scripts") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.073201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.088949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data" (OuterVolumeSpecName: "config-data") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.105560 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.105600 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.105610 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.105631 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.105642 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7psjl\" (UniqueName: \"kubernetes.io/projected/c4dfd337-f738-4898-82e2-1e0362890db6-kube-api-access-7psjl\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.111524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c4dfd337-f738-4898-82e2-1e0362890db6" (UID: "c4dfd337-f738-4898-82e2-1e0362890db6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.127456 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.207122 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4dfd337-f738-4898-82e2-1e0362890db6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.207165 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.309267 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w9vcg"]
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.309972 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d786e434-5fa6-4259-a6bf-594f61b7364a" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.309991 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d786e434-5fa6-4259-a6bf-594f61b7364a" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.309999 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd31932-4bdb-4426-b2bc-77c6cf650a50" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310005 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd31932-4bdb-4426-b2bc-77c6cf650a50" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310014 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08034122-eb86-44a0-aba7-6d6d07afee31" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310021 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="08034122-eb86-44a0-aba7-6d6d07afee31" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310035 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b270a3c-b3a7-484a-ba5d-3acd08465527" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310041 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b270a3c-b3a7-484a-ba5d-3acd08465527" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310052 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-log"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310059 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-log"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310075 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9af00-6a16-4e30-b375-95d5bbe7bec6" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310080 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9af00-6a16-4e30-b375-95d5bbe7bec6" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310090 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-httpd"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310096 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-httpd"
Jan 27 19:02:38 crc kubenswrapper[4915]: E0127 19:02:38.310120 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9960a75-b93c-4b1b-b6f8-6f2168b597df" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310129 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9960a75-b93c-4b1b-b6f8-6f2168b597df" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310281 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-httpd"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310293 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="08034122-eb86-44a0-aba7-6d6d07afee31" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310304 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d786e434-5fa6-4259-a6bf-594f61b7364a" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310311 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" containerName="glance-log"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310320 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9af00-6a16-4e30-b375-95d5bbe7bec6" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310331 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd31932-4bdb-4426-b2bc-77c6cf650a50" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310341 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b270a3c-b3a7-484a-ba5d-3acd08465527" containerName="mariadb-account-create-update"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310350 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9960a75-b93c-4b1b-b6f8-6f2168b597df" containerName="mariadb-database-create"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.310956 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w9vcg"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.313256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.314080 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kltgb"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.314606 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.334184 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w9vcg"]
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.344947 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c4dfd337-f738-4898-82e2-1e0362890db6","Type":"ContainerDied","Data":"b876248c3439affcab72a047008d79858fb24c5de0fc7a7a6df3fafe866fd097"}
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.344993 4915 scope.go:117] "RemoveContainer" containerID="3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.345113 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.345427 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.345898 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.383269 4915 scope.go:117] "RemoveContainer" containerID="21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.386150 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.398856 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.412099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.412197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg"
Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.412229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k475\" (UniqueName:
\"kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.412272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.412349 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.414133 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.437159 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.437353 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.444093 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.515976 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516029 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516098 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516121 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516149 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516165 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516197 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516220 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516235 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9cd\" (UniqueName: \"kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.516310 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 
19:02:38.516335 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k475\" (UniqueName: \"kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.523609 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.525441 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.533606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.537965 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k475\" (UniqueName: \"kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475\") pod \"nova-cell0-conductor-db-sync-w9vcg\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.618987 
4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619049 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619069 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619092 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619119 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619137 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619168 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9cd\" (UniqueName: \"kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.619184 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.621051 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.621119 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.621314 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.626629 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.628033 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.628296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.628653 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.629252 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.656494 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-sm9cd\" (UniqueName: \"kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.660445 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " pod="openstack/glance-default-external-api-0" Jan 27 19:02:38 crc kubenswrapper[4915]: I0127 19:02:38.752433 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:02:39 crc kubenswrapper[4915]: I0127 19:02:39.166225 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w9vcg"] Jan 27 19:02:39 crc kubenswrapper[4915]: I0127 19:02:39.206582 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:02:39 crc kubenswrapper[4915]: I0127 19:02:39.367733 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4dfd337-f738-4898-82e2-1e0362890db6" path="/var/lib/kubelet/pods/c4dfd337-f738-4898-82e2-1e0362890db6/volumes" Jan 27 19:02:39 crc kubenswrapper[4915]: I0127 19:02:39.368675 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerStarted","Data":"189fc85982919d425616b908ee089e3dbafb8c6fc0b6fa2d5bf412971e21f280"} Jan 27 19:02:39 crc kubenswrapper[4915]: I0127 19:02:39.368695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" 
event={"ID":"90fb062d-f400-4aac-9c83-409336ff071d","Type":"ContainerStarted","Data":"2922b644d3054c9f722227be12506631f11d73924c7353e5b7304aa9077cbf5f"} Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.388250 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.388583 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.388842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerStarted","Data":"06641066358db82da7a50ef5141d5040acea1b4c9a664c360a4dc77e290a9f7d"} Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.876362 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.891564 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:40 crc kubenswrapper[4915]: I0127 19:02:40.893397 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 19:02:41 crc kubenswrapper[4915]: I0127 19:02:41.402532 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerStarted","Data":"dd40b8fd0daebe36f826489ae0a0b8562d952c77d4c20c94744566de6a48b7a9"} Jan 27 19:02:41 crc kubenswrapper[4915]: I0127 19:02:41.433997 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.433977947 podStartE2EDuration="3.433977947s" podCreationTimestamp="2026-01-27 19:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-27 19:02:41.429220661 +0000 UTC m=+1252.787074325" watchObservedRunningTime="2026-01-27 19:02:41.433977947 +0000 UTC m=+1252.791831611" Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.471019 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" event={"ID":"90fb062d-f400-4aac-9c83-409336ff071d","Type":"ContainerStarted","Data":"4a61282c22786665c1c6c450e9b65eb6a23a9e6b7e17f42e87e361901c21f628"} Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.504449 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" podStartSLOduration=2.395431846 podStartE2EDuration="10.504418833s" podCreationTimestamp="2026-01-27 19:02:38 +0000 UTC" firstStartedPulling="2026-01-27 19:02:39.17372732 +0000 UTC m=+1250.531580984" lastFinishedPulling="2026-01-27 19:02:47.282714307 +0000 UTC m=+1258.640567971" observedRunningTime="2026-01-27 19:02:48.491426634 +0000 UTC m=+1259.849280308" watchObservedRunningTime="2026-01-27 19:02:48.504418833 +0000 UTC m=+1259.862272537" Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.753948 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.754510 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.797438 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:02:48 crc kubenswrapper[4915]: I0127 19:02:48.833110 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 19:02:49 crc kubenswrapper[4915]: I0127 19:02:49.486563 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 27 19:02:49 crc kubenswrapper[4915]: I0127 19:02:49.486846 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 19:02:50 crc kubenswrapper[4915]: I0127 19:02:50.403822 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 19:02:51 crc kubenswrapper[4915]: I0127 19:02:51.316158 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:02:51 crc kubenswrapper[4915]: I0127 19:02:51.452214 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.111533 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08034122_eb86_44a0_aba7_6d6d07afee31.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08034122_eb86_44a0_aba7_6d6d07afee31.slice: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.112024 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b270a3c_b3a7_484a_ba5d_3acd08465527.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b270a3c_b3a7_484a_ba5d_3acd08465527.slice: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.112055 4915 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd31932_4bdb_4426_b2bc_77c6cf650a50.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd31932_4bdb_4426_b2bc_77c6cf650a50.slice: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.112074 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c9af00_6a16_4e30_b375_95d5bbe7bec6.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c9af00_6a16_4e30_b375_95d5bbe7bec6.slice: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.119326 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-c0d64659bf0dc9eba16f7c6df86d5d02dd32e1f4ed6ea654371489fb6e805cef": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-c0d64659bf0dc9eba16f7c6df86d5d02dd32e1f4ed6ea654371489fb6e805cef: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.119361 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-conmon-db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-conmon-db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517.scope: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: 
W0127 19:02:55.119379 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice/crio-db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517.scope: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.119399 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-61a56d6aad544cfbb1193b75525a7581b1849c4666207e170a53fe82ac0995c8": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-61a56d6aad544cfbb1193b75525a7581b1849c4666207e170a53fe82ac0995c8: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.119417 4915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-conmon-f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-conmon-f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca.scope: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: W0127 19:02:55.119439 4915 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice/crio-f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca.scope: no such file or directory Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.357540 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice/crio-3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-51f19db988a90c12173bf7469812b2b41d8d54e9f4debf7e7701ded8ee263f01\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c1db90_61fc_4aa0_8371_eae7ac202752.slice/crio-71f4ace2955425eb87c5118964008f2ca4f570e18e4e221df8dd2338bffa18ab\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice/crio-conmon-3d1a5e0f164c523c27775e8d8045682b87e3d22b442149c1d46c8e2ad5a7ff75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd786e434_5fa6_4259_a6bf_594f61b7364a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9960a75_b93c_4b1b_b6f8_6f2168b597df.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455215d_2a98_42df_a801_53f31071447e.slice/crio-conmon-e1099dce504cb119e341c86eb7e8e9e9820968ff40772d97e4d942a2883762d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd52116b_0ba6_450d_be65_ac0d409827f5.slice/crio-641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice/crio-b876248c3439affcab72a047008d79858fb24c5de0fc7a7a6df3fafe866fd097\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice/crio-21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd52116b_0ba6_450d_be65_ac0d409827f5.slice/crio-conmon-641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dfd337_f738_4898_82e2_1e0362890db6.slice/crio-conmon-21810529ed8ee02366c51c1050f3fb71bd8d8b4d18df7ce556caba17c5d367cd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.480026 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.550170 4915 generic.go:334] "Generic (PLEG): container finished" podID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerID="641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91" exitCode=137 Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.550245 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerDied","Data":"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91"} Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.550535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd52116b-0ba6-450d-be65-ac0d409827f5","Type":"ContainerDied","Data":"558c1bc1b0d6cdb1c535909aeabb4089eca607288562b9b74589df1a83892e3a"} Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.550553 4915 scope.go:117] "RemoveContainer" containerID="641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.550256 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.570149 4915 scope.go:117] "RemoveContainer" containerID="dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.576680 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.576779 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577129 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w9rg\" (UniqueName: \"kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577473 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577548 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: 
\"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577639 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577703 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts\") pod \"cd52116b-0ba6-450d-be65-ac0d409827f5\" (UID: \"cd52116b-0ba6-450d-be65-ac0d409827f5\") " Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.577885 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.578411 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.579029 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.579125 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd52116b-0ba6-450d-be65-ac0d409827f5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.583305 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts" (OuterVolumeSpecName: "scripts") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.585666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg" (OuterVolumeSpecName: "kube-api-access-6w9rg") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "kube-api-access-6w9rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.596102 4915 scope.go:117] "RemoveContainer" containerID="53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.615976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.652940 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.674093 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data" (OuterVolumeSpecName: "config-data") pod "cd52116b-0ba6-450d-be65-ac0d409827f5" (UID: "cd52116b-0ba6-450d-be65-ac0d409827f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.681215 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.681246 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w9rg\" (UniqueName: \"kubernetes.io/projected/cd52116b-0ba6-450d-be65-ac0d409827f5-kube-api-access-6w9rg\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.681256 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.681266 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.681276 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd52116b-0ba6-450d-be65-ac0d409827f5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.684324 4915 scope.go:117] "RemoveContainer" containerID="8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.713658 4915 scope.go:117] "RemoveContainer" containerID="641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.714155 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91\": container with ID starting with 641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91 not found: ID does not exist" containerID="641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.714193 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91"} err="failed to get container status \"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91\": rpc error: code = NotFound desc = could not find container \"641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91\": container with ID starting with 641e9fdf4baae3d740f09c1d1b16af80fa9de4c484c4c3080f571c8b21550b91 not found: ID does not exist" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.714218 4915 scope.go:117] "RemoveContainer" containerID="dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3" Jan 27 19:02:55 crc 
kubenswrapper[4915]: E0127 19:02:55.714671 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3\": container with ID starting with dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3 not found: ID does not exist" containerID="dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.714757 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3"} err="failed to get container status \"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3\": rpc error: code = NotFound desc = could not find container \"dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3\": container with ID starting with dc6091a43f5d7c5ed6b03f9d758e430eb03928414a9db9dbb5570cb2d8bb55d3 not found: ID does not exist" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.714804 4915 scope.go:117] "RemoveContainer" containerID="53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.715291 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d\": container with ID starting with 53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d not found: ID does not exist" containerID="53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.715342 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d"} err="failed to get container status 
\"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d\": rpc error: code = NotFound desc = could not find container \"53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d\": container with ID starting with 53083e2ba0aec9c683e88400fa37c10d11ffea5a8da161d8cd9fc59297bd294d not found: ID does not exist" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.715368 4915 scope.go:117] "RemoveContainer" containerID="8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.715670 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd\": container with ID starting with 8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd not found: ID does not exist" containerID="8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.715729 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd"} err="failed to get container status \"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd\": rpc error: code = NotFound desc = could not find container \"8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd\": container with ID starting with 8cd3982a3f8882e1a1f2a3a43dd50bbc3c4aff2a860d83d85457ed541934c0dd not found: ID does not exist" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.918101 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.934366 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.947699 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.948222 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-notification-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948252 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-notification-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.948287 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="sg-core" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948298 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="sg-core" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.948321 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-central-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948331 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-central-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: E0127 19:02:55.948354 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="proxy-httpd" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948364 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="proxy-httpd" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948626 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-notification-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948665 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="proxy-httpd" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948684 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="sg-core" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.948712 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" containerName="ceilometer-central-agent" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.951250 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.953639 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.955127 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:02:55 crc kubenswrapper[4915]: I0127 19:02:55.960667 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089139 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089267 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089593 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089748 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.089843 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8g5l\" (UniqueName: \"kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.192082 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd\") pod 
\"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.205947 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.206005 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.206087 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.206131 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8g5l\" (UniqueName: \"kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.206333 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.206375 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.197534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.211644 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.215059 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.218273 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.220721 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc 
kubenswrapper[4915]: I0127 19:02:56.221162 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.244235 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8g5l\" (UniqueName: \"kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l\") pod \"ceilometer-0\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.276116 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:02:56 crc kubenswrapper[4915]: I0127 19:02:56.793108 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:02:56 crc kubenswrapper[4915]: W0127 19:02:56.793146 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e9c2f5_33a8_4439_97cc_5da38f3ca1e7.slice/crio-9b64b5d209d788961beaf9e51fac63ae89efd1885b6e90508d1ff918172c86cd WatchSource:0}: Error finding container 9b64b5d209d788961beaf9e51fac63ae89efd1885b6e90508d1ff918172c86cd: Status 404 returned error can't find the container with id 9b64b5d209d788961beaf9e51fac63ae89efd1885b6e90508d1ff918172c86cd Jan 27 19:02:57 crc kubenswrapper[4915]: I0127 19:02:57.368812 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd52116b-0ba6-450d-be65-ac0d409827f5" path="/var/lib/kubelet/pods/cd52116b-0ba6-450d-be65-ac0d409827f5/volumes" Jan 27 19:02:57 crc kubenswrapper[4915]: I0127 19:02:57.579159 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerStarted","Data":"9b64b5d209d788961beaf9e51fac63ae89efd1885b6e90508d1ff918172c86cd"} Jan 27 19:02:58 crc kubenswrapper[4915]: I0127 19:02:58.588925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerStarted","Data":"a1d29196fbdde52ccfad6154f3b0fb7fe2b84b303b95e505e00422e31d73a010"} Jan 27 19:02:58 crc kubenswrapper[4915]: I0127 19:02:58.589283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerStarted","Data":"c87c5870eae5df2cc19225c57bea3e5f2f3af24a6f3798c34453c3aa25233791"} Jan 27 19:02:58 crc kubenswrapper[4915]: I0127 19:02:58.590508 4915 generic.go:334] "Generic (PLEG): container finished" podID="90fb062d-f400-4aac-9c83-409336ff071d" containerID="4a61282c22786665c1c6c450e9b65eb6a23a9e6b7e17f42e87e361901c21f628" exitCode=0 Jan 27 19:02:58 crc kubenswrapper[4915]: I0127 19:02:58.590536 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" event={"ID":"90fb062d-f400-4aac-9c83-409336ff071d","Type":"ContainerDied","Data":"4a61282c22786665c1c6c450e9b65eb6a23a9e6b7e17f42e87e361901c21f628"} Jan 27 19:02:59 crc kubenswrapper[4915]: I0127 19:02:59.604264 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerStarted","Data":"9a2451d511edb32b9951baa240367bb8330982d97a6ad4a00eb9bb810361d199"} Jan 27 19:02:59 crc kubenswrapper[4915]: I0127 19:02:59.940948 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.076449 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data\") pod \"90fb062d-f400-4aac-9c83-409336ff071d\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.076923 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts\") pod \"90fb062d-f400-4aac-9c83-409336ff071d\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.077200 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k475\" (UniqueName: \"kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475\") pod \"90fb062d-f400-4aac-9c83-409336ff071d\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.081570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle\") pod \"90fb062d-f400-4aac-9c83-409336ff071d\" (UID: \"90fb062d-f400-4aac-9c83-409336ff071d\") " Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.083705 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts" (OuterVolumeSpecName: "scripts") pod "90fb062d-f400-4aac-9c83-409336ff071d" (UID: "90fb062d-f400-4aac-9c83-409336ff071d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.084124 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475" (OuterVolumeSpecName: "kube-api-access-5k475") pod "90fb062d-f400-4aac-9c83-409336ff071d" (UID: "90fb062d-f400-4aac-9c83-409336ff071d"). InnerVolumeSpecName "kube-api-access-5k475". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.103078 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data" (OuterVolumeSpecName: "config-data") pod "90fb062d-f400-4aac-9c83-409336ff071d" (UID: "90fb062d-f400-4aac-9c83-409336ff071d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.119438 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90fb062d-f400-4aac-9c83-409336ff071d" (UID: "90fb062d-f400-4aac-9c83-409336ff071d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.184784 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.184860 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k475\" (UniqueName: \"kubernetes.io/projected/90fb062d-f400-4aac-9c83-409336ff071d-kube-api-access-5k475\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.184883 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.184901 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fb062d-f400-4aac-9c83-409336ff071d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.615393 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" event={"ID":"90fb062d-f400-4aac-9c83-409336ff071d","Type":"ContainerDied","Data":"2922b644d3054c9f722227be12506631f11d73924c7353e5b7304aa9077cbf5f"} Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.616876 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2922b644d3054c9f722227be12506631f11d73924c7353e5b7304aa9077cbf5f" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.615437 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w9vcg" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.617922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerStarted","Data":"52e0bf2d77e29d1919a72d1ffede334758be72296ae021122132409dcfa2406e"} Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.619530 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.663676 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.187103906 podStartE2EDuration="5.663653798s" podCreationTimestamp="2026-01-27 19:02:55 +0000 UTC" firstStartedPulling="2026-01-27 19:02:56.79543096 +0000 UTC m=+1268.153284634" lastFinishedPulling="2026-01-27 19:03:00.271980862 +0000 UTC m=+1271.629834526" observedRunningTime="2026-01-27 19:03:00.644722177 +0000 UTC m=+1272.002575851" watchObservedRunningTime="2026-01-27 19:03:00.663653798 +0000 UTC m=+1272.021507462" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.731709 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:03:00 crc kubenswrapper[4915]: E0127 19:03:00.732326 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fb062d-f400-4aac-9c83-409336ff071d" containerName="nova-cell0-conductor-db-sync" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.732354 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fb062d-f400-4aac-9c83-409336ff071d" containerName="nova-cell0-conductor-db-sync" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.732683 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fb062d-f400-4aac-9c83-409336ff071d" containerName="nova-cell0-conductor-db-sync" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 
19:03:00.733682 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.737281 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.738155 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kltgb" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.748618 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.796220 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.796309 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrh2n\" (UniqueName: \"kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.796374 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.906982 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vrh2n\" (UniqueName: \"kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.907299 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.907579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.913365 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.915029 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:00 crc kubenswrapper[4915]: I0127 19:03:00.930378 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrh2n\" (UniqueName: \"kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n\") pod 
\"nova-cell0-conductor-0\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") " pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:01 crc kubenswrapper[4915]: I0127 19:03:01.058554 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:01 crc kubenswrapper[4915]: I0127 19:03:01.533328 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:03:01 crc kubenswrapper[4915]: I0127 19:03:01.628223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7","Type":"ContainerStarted","Data":"eb386f79be2f2c517c1a63cc09c48bc18f5204fafe2161a73fd4032c6a7e536d"} Jan 27 19:03:02 crc kubenswrapper[4915]: I0127 19:03:02.641596 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7","Type":"ContainerStarted","Data":"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"} Jan 27 19:03:02 crc kubenswrapper[4915]: I0127 19:03:02.664378 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.66435671 podStartE2EDuration="2.66435671s" podCreationTimestamp="2026-01-27 19:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:02.662368706 +0000 UTC m=+1274.020222390" watchObservedRunningTime="2026-01-27 19:03:02.66435671 +0000 UTC m=+1274.022210374" Jan 27 19:03:03 crc kubenswrapper[4915]: I0127 19:03:03.651236 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.096177 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 19:03:06 crc 
kubenswrapper[4915]: I0127 19:03:06.618171 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wq9n4"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.621693 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.628170 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.628542 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.635191 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wq9n4"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.715603 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.715652 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.715677 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9n8b\" (UniqueName: \"kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: 
\"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.715708 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.817691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9n8b\" (UniqueName: \"kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.817946 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.818193 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.818556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " 
pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.823603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.827454 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.855903 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.857650 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.857877 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.860611 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9n8b\" (UniqueName: \"kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b\") pod \"nova-cell0-cell-mapping-wq9n4\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.864083 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.901266 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.923105 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.923195 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:06 crc 
kubenswrapper[4915]: I0127 19:03:06.923366 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwns\" (UniqueName: \"kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.960945 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.961185 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.961288 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:06 crc kubenswrapper[4915]: I0127 19:03:06.976431 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.006147 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.011081 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.020536 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.023679 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029224 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029687 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwns\" (UniqueName: \"kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029725 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkhsc\" (UniqueName: \"kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029775 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029809 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.029844 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.034384 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.050768 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.053515 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.060316 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.075783 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwns\" (UniqueName: \"kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.106077 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131339 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84fgh\" (UniqueName: \"kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131420 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqk46\" (UniqueName: \"kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131470 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131636 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131696 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkhsc\" (UniqueName: \"kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131759 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131828 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.131872 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.132111 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.146311 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.152528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 
crc kubenswrapper[4915]: I0127 19:03:07.167343 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkhsc\" (UniqueName: \"kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.170758 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data\") pod \"nova-api-0\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") " pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.188686 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.190297 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.205280 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238098 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238489 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238614 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84fgh\" (UniqueName: \"kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238646 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.238660 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqk46\" (UniqueName: \"kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.239385 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.252250 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.253163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.267726 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.269183 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.272656 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84fgh\" (UniqueName: \"kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh\") pod \"nova-metadata-0\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.274207 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 
19:03:07.281408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqk46\" (UniqueName: \"kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46\") pod \"nova-scheduler-0\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") " pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.285346 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.307958 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.341320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.341429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.341472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.343325 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.343448 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.343572 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7762\" (UniqueName: \"kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445560 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7762\" (UniqueName: \"kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445619 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445676 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445706 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.445756 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.448043 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.448521 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.448759 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.449299 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.449529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.465115 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7762\" (UniqueName: \"kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762\") pod \"dnsmasq-dns-bccf8f775-kffk9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.491298 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.541227 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.587316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.611479 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wq9n4"] Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.756918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wq9n4" event={"ID":"8fcc76eb-7a19-4509-b8e0-1051dc3e3231","Type":"ContainerStarted","Data":"8968ba7abd072b81806127c2e328d1f1e9c5f8521502c6b84892fc09ea74feab"} Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.836384 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:07 crc kubenswrapper[4915]: W0127 19:03:07.867679 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83613ee1_cb91_4a7d_8b99_0c8912fa40a5.slice/crio-96d643301d62537a80fc9c3c655e6f0d840f36db2770372514ca85667a189af4 WatchSource:0}: Error finding container 96d643301d62537a80fc9c3c655e6f0d840f36db2770372514ca85667a189af4: Status 404 returned error can't find the container with id 96d643301d62537a80fc9c3c655e6f0d840f36db2770372514ca85667a189af4 Jan 27 19:03:07 crc kubenswrapper[4915]: I0127 19:03:07.980894 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:08 crc kubenswrapper[4915]: W0127 19:03:08.003384 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775f6671_cdff_4296_9acd_59fa7d919c0d.slice/crio-cb6892f8414535089a46f4e52226afaf7b2c53c5783384be8718f8873724bf66 WatchSource:0}: Error finding container cb6892f8414535089a46f4e52226afaf7b2c53c5783384be8718f8873724bf66: Status 404 returned error can't find the container with id cb6892f8414535089a46f4e52226afaf7b2c53c5783384be8718f8873724bf66 Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.072475 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fwtcc"] Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.074073 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.078482 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.080357 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.093893 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fwtcc"] Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.167869 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.167963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgwr7\" (UniqueName: 
\"kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.168153 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.168357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.184059 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:08 crc kubenswrapper[4915]: W0127 19:03:08.186930 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7f63629_3c4c_4cef_9a51_6015c3c95211.slice/crio-853e2e0f92178a3e3d14ca7d6c1291777bf223ebe63420b60844b9134c18c433 WatchSource:0}: Error finding container 853e2e0f92178a3e3d14ca7d6c1291777bf223ebe63420b60844b9134c18c433: Status 404 returned error can't find the container with id 853e2e0f92178a3e3d14ca7d6c1291777bf223ebe63420b60844b9134c18c433 Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.212591 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.269539 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.269828 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.269891 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgwr7\" (UniqueName: \"kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.269937 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.275745 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.275885 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.276299 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.292715 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgwr7\" (UniqueName: \"kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7\") pod \"nova-cell1-conductor-db-sync-fwtcc\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.404371 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.411310 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:08 crc kubenswrapper[4915]: W0127 19:03:08.423323 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5675837c_21b0_4474_984d_a253dfcb0df9.slice/crio-58788f3c9acb71cd2420974b19ac534c3998eedbdcf217cc917ca63324024cca WatchSource:0}: Error finding container 58788f3c9acb71cd2420974b19ac534c3998eedbdcf217cc917ca63324024cca: Status 404 returned error can't find the container with id 58788f3c9acb71cd2420974b19ac534c3998eedbdcf217cc917ca63324024cca Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.779102 4915 generic.go:334] "Generic (PLEG): container finished" podID="5675837c-21b0-4474-984d-a253dfcb0df9" containerID="946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad" exitCode=0 Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.779319 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" event={"ID":"5675837c-21b0-4474-984d-a253dfcb0df9","Type":"ContainerDied","Data":"946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.779441 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" event={"ID":"5675837c-21b0-4474-984d-a253dfcb0df9","Type":"ContainerStarted","Data":"58788f3c9acb71cd2420974b19ac534c3998eedbdcf217cc917ca63324024cca"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.784099 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83613ee1-cb91-4a7d-8b99-0c8912fa40a5","Type":"ContainerStarted","Data":"96d643301d62537a80fc9c3c655e6f0d840f36db2770372514ca85667a189af4"} Jan 27 19:03:08 crc 
kubenswrapper[4915]: I0127 19:03:08.795154 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerStarted","Data":"cb6892f8414535089a46f4e52226afaf7b2c53c5783384be8718f8873724bf66"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.801951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerStarted","Data":"5e42402f9956a7e6fa64df4643cb057cf86e847d2107313c37440499c562fecf"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.803226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7f63629-3c4c-4cef-9a51-6015c3c95211","Type":"ContainerStarted","Data":"853e2e0f92178a3e3d14ca7d6c1291777bf223ebe63420b60844b9134c18c433"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.807123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wq9n4" event={"ID":"8fcc76eb-7a19-4509-b8e0-1051dc3e3231","Type":"ContainerStarted","Data":"aa2060fdabb091563f415077006bb9c0a8815f2020f87e80cd0822b5238c5ef6"} Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.824389 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wq9n4" podStartSLOduration=2.824372696 podStartE2EDuration="2.824372696s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:08.822077445 +0000 UTC m=+1280.179931109" watchObservedRunningTime="2026-01-27 19:03:08.824372696 +0000 UTC m=+1280.182226360" Jan 27 19:03:08 crc kubenswrapper[4915]: I0127 19:03:08.916337 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fwtcc"] Jan 27 19:03:08 crc kubenswrapper[4915]: W0127 
19:03:08.969993 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ffe11b_985f_4acd_a2ea_12983908d961.slice/crio-499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17 WatchSource:0}: Error finding container 499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17: Status 404 returned error can't find the container with id 499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17 Jan 27 19:03:09 crc kubenswrapper[4915]: I0127 19:03:09.818283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" event={"ID":"77ffe11b-985f-4acd-a2ea-12983908d961","Type":"ContainerStarted","Data":"16a9a2d97d7bcb250afc9ef0659b63ec28ea84248a94d15544709c1947ac3836"} Jan 27 19:03:09 crc kubenswrapper[4915]: I0127 19:03:09.818637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" event={"ID":"77ffe11b-985f-4acd-a2ea-12983908d961","Type":"ContainerStarted","Data":"499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17"} Jan 27 19:03:09 crc kubenswrapper[4915]: I0127 19:03:09.824143 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" event={"ID":"5675837c-21b0-4474-984d-a253dfcb0df9","Type":"ContainerStarted","Data":"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a"} Jan 27 19:03:09 crc kubenswrapper[4915]: I0127 19:03:09.850240 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" podStartSLOduration=1.850217593 podStartE2EDuration="1.850217593s" podCreationTimestamp="2026-01-27 19:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:09.833994872 +0000 UTC m=+1281.191848536" watchObservedRunningTime="2026-01-27 19:03:09.850217593 
+0000 UTC m=+1281.208071257" Jan 27 19:03:09 crc kubenswrapper[4915]: I0127 19:03:09.859931 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" podStartSLOduration=2.859913259 podStartE2EDuration="2.859913259s" podCreationTimestamp="2026-01-27 19:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:09.855998272 +0000 UTC m=+1281.213851936" watchObservedRunningTime="2026-01-27 19:03:09.859913259 +0000 UTC m=+1281.217766923" Jan 27 19:03:10 crc kubenswrapper[4915]: I0127 19:03:10.834320 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:10 crc kubenswrapper[4915]: I0127 19:03:10.936627 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:10 crc kubenswrapper[4915]: I0127 19:03:10.948559 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:11 crc kubenswrapper[4915]: I0127 19:03:11.844515 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerStarted","Data":"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.857945 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerStarted","Data":"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.858426 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-log" 
containerID="cri-o://76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" gracePeriod=30 Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.858976 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-metadata" containerID="cri-o://a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" gracePeriod=30 Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.860464 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7f63629-3c4c-4cef-9a51-6015c3c95211","Type":"ContainerStarted","Data":"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.862441 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83613ee1-cb91-4a7d-8b99-0c8912fa40a5","Type":"ContainerStarted","Data":"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.862640 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621" gracePeriod=30 Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.871360 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerStarted","Data":"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.871412 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerStarted","Data":"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"} Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.884729 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.601830356 podStartE2EDuration="6.884704335s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="2026-01-27 19:03:08.227640648 +0000 UTC m=+1279.585494312" lastFinishedPulling="2026-01-27 19:03:11.510514627 +0000 UTC m=+1282.868368291" observedRunningTime="2026-01-27 19:03:12.87863424 +0000 UTC m=+1284.236487914" watchObservedRunningTime="2026-01-27 19:03:12.884704335 +0000 UTC m=+1284.242558009" Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.903370 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.324031384 podStartE2EDuration="6.90334829s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="2026-01-27 19:03:07.925364021 +0000 UTC m=+1279.283217685" lastFinishedPulling="2026-01-27 19:03:11.504680927 +0000 UTC m=+1282.862534591" observedRunningTime="2026-01-27 19:03:12.891008995 +0000 UTC m=+1284.248862659" watchObservedRunningTime="2026-01-27 19:03:12.90334829 +0000 UTC m=+1284.261201964" Jan 27 19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.914063 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.41058784 podStartE2EDuration="6.914034658s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="2026-01-27 19:03:08.008181604 +0000 UTC m=+1279.366035278" lastFinishedPulling="2026-01-27 19:03:11.511628422 +0000 UTC m=+1282.869482096" observedRunningTime="2026-01-27 19:03:12.904737521 +0000 UTC m=+1284.262591185" watchObservedRunningTime="2026-01-27 19:03:12.914034658 +0000 UTC m=+1284.271888322" Jan 27 
19:03:12 crc kubenswrapper[4915]: I0127 19:03:12.928107 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.617889672 podStartE2EDuration="6.92808981s" podCreationTimestamp="2026-01-27 19:03:06 +0000 UTC" firstStartedPulling="2026-01-27 19:03:08.192898944 +0000 UTC m=+1279.550752608" lastFinishedPulling="2026-01-27 19:03:11.503099082 +0000 UTC m=+1282.860952746" observedRunningTime="2026-01-27 19:03:12.922704771 +0000 UTC m=+1284.280558435" watchObservedRunningTime="2026-01-27 19:03:12.92808981 +0000 UTC m=+1284.285943474" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.458108 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.580221 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs\") pod \"6199537a-d8a6-401d-882b-df63e10f1705\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.580339 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data\") pod \"6199537a-d8a6-401d-882b-df63e10f1705\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.580401 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle\") pod \"6199537a-d8a6-401d-882b-df63e10f1705\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.580521 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84fgh\" 
(UniqueName: \"kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh\") pod \"6199537a-d8a6-401d-882b-df63e10f1705\" (UID: \"6199537a-d8a6-401d-882b-df63e10f1705\") " Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.580654 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs" (OuterVolumeSpecName: "logs") pod "6199537a-d8a6-401d-882b-df63e10f1705" (UID: "6199537a-d8a6-401d-882b-df63e10f1705"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.589007 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6199537a-d8a6-401d-882b-df63e10f1705-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.609154 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh" (OuterVolumeSpecName: "kube-api-access-84fgh") pod "6199537a-d8a6-401d-882b-df63e10f1705" (UID: "6199537a-d8a6-401d-882b-df63e10f1705"). InnerVolumeSpecName "kube-api-access-84fgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.621527 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6199537a-d8a6-401d-882b-df63e10f1705" (UID: "6199537a-d8a6-401d-882b-df63e10f1705"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.628057 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data" (OuterVolumeSpecName: "config-data") pod "6199537a-d8a6-401d-882b-df63e10f1705" (UID: "6199537a-d8a6-401d-882b-df63e10f1705"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.691177 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84fgh\" (UniqueName: \"kubernetes.io/projected/6199537a-d8a6-401d-882b-df63e10f1705-kube-api-access-84fgh\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.691214 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.691224 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199537a-d8a6-401d-882b-df63e10f1705-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883038 4915 generic.go:334] "Generic (PLEG): container finished" podID="6199537a-d8a6-401d-882b-df63e10f1705" containerID="a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" exitCode=0 Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883067 4915 generic.go:334] "Generic (PLEG): container finished" podID="6199537a-d8a6-401d-882b-df63e10f1705" containerID="76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" exitCode=143 Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883306 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerDied","Data":"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d"} Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883378 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerDied","Data":"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd"} Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883394 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6199537a-d8a6-401d-882b-df63e10f1705","Type":"ContainerDied","Data":"5e42402f9956a7e6fa64df4643cb057cf86e847d2107313c37440499c562fecf"} Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883398 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.883414 4915 scope.go:117] "RemoveContainer" containerID="a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.919311 4915 scope.go:117] "RemoveContainer" containerID="76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.935662 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.940700 4915 scope.go:117] "RemoveContainer" containerID="a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" Jan 27 19:03:13 crc kubenswrapper[4915]: E0127 19:03:13.942342 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d\": container with ID starting with a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d not found: ID does not exist" 
containerID="a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.942389 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d"} err="failed to get container status \"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d\": rpc error: code = NotFound desc = could not find container \"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d\": container with ID starting with a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d not found: ID does not exist" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.942445 4915 scope.go:117] "RemoveContainer" containerID="76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" Jan 27 19:03:13 crc kubenswrapper[4915]: E0127 19:03:13.946014 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd\": container with ID starting with 76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd not found: ID does not exist" containerID="76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.946071 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd"} err="failed to get container status \"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd\": rpc error: code = NotFound desc = could not find container \"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd\": container with ID starting with 76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd not found: ID does not exist" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.946103 4915 scope.go:117] 
"RemoveContainer" containerID="a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.951127 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d"} err="failed to get container status \"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d\": rpc error: code = NotFound desc = could not find container \"a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d\": container with ID starting with a9540274d5fbfa4440a83bce8091317cf301c93ce288def7e4854f0ad91fb89d not found: ID does not exist" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.951171 4915 scope.go:117] "RemoveContainer" containerID="76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.952330 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd"} err="failed to get container status \"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd\": rpc error: code = NotFound desc = could not find container \"76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd\": container with ID starting with 76d6c767772c0cf0830ab8c969aa61d8d4e932636f94a310cc1aeb170d1053cd not found: ID does not exist" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.952388 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.968417 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:13 crc kubenswrapper[4915]: E0127 19:03:13.968975 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-log" Jan 27 19:03:13 crc 
kubenswrapper[4915]: I0127 19:03:13.968996 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-log" Jan 27 19:03:13 crc kubenswrapper[4915]: E0127 19:03:13.969018 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-metadata" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.969026 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-metadata" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.969245 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-metadata" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.969274 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6199537a-d8a6-401d-882b-df63e10f1705" containerName="nova-metadata-log" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.970548 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.973171 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.973519 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:03:13 crc kubenswrapper[4915]: I0127 19:03:13.979914 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.100382 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.100504 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgk9r\" (UniqueName: \"kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.100547 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.100584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.100614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.202540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.202757 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgk9r\" (UniqueName: \"kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.202838 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.202886 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.202953 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.204826 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.210510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.210911 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.213887 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data\") pod \"nova-metadata-0\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.225569 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgk9r\" (UniqueName: \"kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r\") pod \"nova-metadata-0\" 
(UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.333692 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.802387 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:14 crc kubenswrapper[4915]: W0127 19:03:14.807466 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaea3ce45_bcf3_4e68_9c76_3bd52cf0e111.slice/crio-c60695282e9d498da9cb82094c7578a89869132922c8389016d59c3822e204a8 WatchSource:0}: Error finding container c60695282e9d498da9cb82094c7578a89869132922c8389016d59c3822e204a8: Status 404 returned error can't find the container with id c60695282e9d498da9cb82094c7578a89869132922c8389016d59c3822e204a8 Jan 27 19:03:14 crc kubenswrapper[4915]: I0127 19:03:14.895823 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerStarted","Data":"c60695282e9d498da9cb82094c7578a89869132922c8389016d59c3822e204a8"} Jan 27 19:03:15 crc kubenswrapper[4915]: I0127 19:03:15.370823 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6199537a-d8a6-401d-882b-df63e10f1705" path="/var/lib/kubelet/pods/6199537a-d8a6-401d-882b-df63e10f1705/volumes" Jan 27 19:03:15 crc kubenswrapper[4915]: I0127 19:03:15.911641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerStarted","Data":"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56"} Jan 27 19:03:15 crc kubenswrapper[4915]: I0127 19:03:15.911690 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerStarted","Data":"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47"} Jan 27 19:03:15 crc kubenswrapper[4915]: I0127 19:03:15.937577 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.937552396 podStartE2EDuration="2.937552396s" podCreationTimestamp="2026-01-27 19:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:15.930182652 +0000 UTC m=+1287.288036326" watchObservedRunningTime="2026-01-27 19:03:15.937552396 +0000 UTC m=+1287.295406070" Jan 27 19:03:16 crc kubenswrapper[4915]: I0127 19:03:16.922042 4915 generic.go:334] "Generic (PLEG): container finished" podID="77ffe11b-985f-4acd-a2ea-12983908d961" containerID="16a9a2d97d7bcb250afc9ef0659b63ec28ea84248a94d15544709c1947ac3836" exitCode=0 Jan 27 19:03:16 crc kubenswrapper[4915]: I0127 19:03:16.922141 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" event={"ID":"77ffe11b-985f-4acd-a2ea-12983908d961","Type":"ContainerDied","Data":"16a9a2d97d7bcb250afc9ef0659b63ec28ea84248a94d15544709c1947ac3836"} Jan 27 19:03:16 crc kubenswrapper[4915]: I0127 19:03:16.924990 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fcc76eb-7a19-4509-b8e0-1051dc3e3231" containerID="aa2060fdabb091563f415077006bb9c0a8815f2020f87e80cd0822b5238c5ef6" exitCode=0 Jan 27 19:03:16 crc kubenswrapper[4915]: I0127 19:03:16.925054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wq9n4" event={"ID":"8fcc76eb-7a19-4509-b8e0-1051dc3e3231","Type":"ContainerDied","Data":"aa2060fdabb091563f415077006bb9c0a8815f2020f87e80cd0822b5238c5ef6"} Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.285852 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.308044 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.308094 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.492530 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.492605 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.528868 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.589047 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.676078 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"] Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.676568 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="dnsmasq-dns" containerID="cri-o://8c6168fa75a3ae1fe6ff503932c9a5efeae098ff0353637e894962b17722529b" gracePeriod=10 Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.937737 4915 generic.go:334] "Generic (PLEG): container finished" podID="7d649213-0d40-4201-9012-222a20cb227d" containerID="8c6168fa75a3ae1fe6ff503932c9a5efeae098ff0353637e894962b17722529b" exitCode=0 Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.938182 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6578955fd5-67xmd" event={"ID":"7d649213-0d40-4201-9012-222a20cb227d","Type":"ContainerDied","Data":"8c6168fa75a3ae1fe6ff503932c9a5efeae098ff0353637e894962b17722529b"} Jan 27 19:03:17 crc kubenswrapper[4915]: I0127 19:03:17.987871 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.208382 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.306178 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.306222 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.306349 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpg6n\" (UniqueName: \"kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.308556 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:18 crc 
kubenswrapper[4915]: I0127 19:03:18.311564 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.311808 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.311844 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb\") pod \"7d649213-0d40-4201-9012-222a20cb227d\" (UID: \"7d649213-0d40-4201-9012-222a20cb227d\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.369840 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.370190 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n" (OuterVolumeSpecName: "kube-api-access-jpg6n") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "kube-api-access-jpg6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.392846 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.392866 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config" (OuterVolumeSpecName: "config") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.393224 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.400099 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.418235 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d649213-0d40-4201-9012-222a20cb227d" (UID: "7d649213-0d40-4201-9012-222a20cb227d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420325 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpg6n\" (UniqueName: \"kubernetes.io/projected/7d649213-0d40-4201-9012-222a20cb227d-kube-api-access-jpg6n\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420357 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420394 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420408 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420419 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.420428 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7d649213-0d40-4201-9012-222a20cb227d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.452165 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.501945 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.623996 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts\") pod \"77ffe11b-985f-4acd-a2ea-12983908d961\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.624285 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle\") pod \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.624408 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data\") pod \"77ffe11b-985f-4acd-a2ea-12983908d961\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.624511 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9n8b\" (UniqueName: \"kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b\") pod \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.624626 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data\") pod \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.625074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle\") pod \"77ffe11b-985f-4acd-a2ea-12983908d961\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.625159 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgwr7\" (UniqueName: \"kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7\") pod \"77ffe11b-985f-4acd-a2ea-12983908d961\" (UID: \"77ffe11b-985f-4acd-a2ea-12983908d961\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.625303 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts\") pod \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\" (UID: \"8fcc76eb-7a19-4509-b8e0-1051dc3e3231\") " Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.629210 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts" (OuterVolumeSpecName: "scripts") pod "77ffe11b-985f-4acd-a2ea-12983908d961" (UID: "77ffe11b-985f-4acd-a2ea-12983908d961"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.629968 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts" (OuterVolumeSpecName: "scripts") pod "8fcc76eb-7a19-4509-b8e0-1051dc3e3231" (UID: "8fcc76eb-7a19-4509-b8e0-1051dc3e3231"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.630201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b" (OuterVolumeSpecName: "kube-api-access-q9n8b") pod "8fcc76eb-7a19-4509-b8e0-1051dc3e3231" (UID: "8fcc76eb-7a19-4509-b8e0-1051dc3e3231"). InnerVolumeSpecName "kube-api-access-q9n8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.630845 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7" (OuterVolumeSpecName: "kube-api-access-dgwr7") pod "77ffe11b-985f-4acd-a2ea-12983908d961" (UID: "77ffe11b-985f-4acd-a2ea-12983908d961"). InnerVolumeSpecName "kube-api-access-dgwr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.653322 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ffe11b-985f-4acd-a2ea-12983908d961" (UID: "77ffe11b-985f-4acd-a2ea-12983908d961"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.654430 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data" (OuterVolumeSpecName: "config-data") pod "77ffe11b-985f-4acd-a2ea-12983908d961" (UID: "77ffe11b-985f-4acd-a2ea-12983908d961"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.654859 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data" (OuterVolumeSpecName: "config-data") pod "8fcc76eb-7a19-4509-b8e0-1051dc3e3231" (UID: "8fcc76eb-7a19-4509-b8e0-1051dc3e3231"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.659375 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fcc76eb-7a19-4509-b8e0-1051dc3e3231" (UID: "8fcc76eb-7a19-4509-b8e0-1051dc3e3231"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730146 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730173 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730184 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730194 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9n8b\" (UniqueName: \"kubernetes.io/projected/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-kube-api-access-q9n8b\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730203 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730211 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffe11b-985f-4acd-a2ea-12983908d961-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730219 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgwr7\" (UniqueName: \"kubernetes.io/projected/77ffe11b-985f-4acd-a2ea-12983908d961-kube-api-access-dgwr7\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.730227 4915 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcc76eb-7a19-4509-b8e0-1051dc3e3231-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.949152 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wq9n4" event={"ID":"8fcc76eb-7a19-4509-b8e0-1051dc3e3231","Type":"ContainerDied","Data":"8968ba7abd072b81806127c2e328d1f1e9c5f8521502c6b84892fc09ea74feab"} Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.949188 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8968ba7abd072b81806127c2e328d1f1e9c5f8521502c6b84892fc09ea74feab" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.949248 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wq9n4" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.971518 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" event={"ID":"7d649213-0d40-4201-9012-222a20cb227d","Type":"ContainerDied","Data":"4227187e95d8fbdf545e8bb0a1ee52a1941ab0623a48f3e4294e7b8e43a5c4ca"} Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.971919 4915 scope.go:117] "RemoveContainer" containerID="8c6168fa75a3ae1fe6ff503932c9a5efeae098ff0353637e894962b17722529b" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.972130 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-67xmd" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.984447 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.984494 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fwtcc" event={"ID":"77ffe11b-985f-4acd-a2ea-12983908d961","Type":"ContainerDied","Data":"499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17"} Jan 27 19:03:18 crc kubenswrapper[4915]: I0127 19:03:18.984516 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499b1df5ca130f189168fcd35d550c7e4eaa94eb94f596a54ad932bd0a9d8b17" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.012898 4915 scope.go:117] "RemoveContainer" containerID="425c5e8236250a8b8e2984437acdee213e11990b4905f1a2f19fd4e5f5bf5221" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.047574 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.058812 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-67xmd"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.074712 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: E0127 19:03:19.075187 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffe11b-985f-4acd-a2ea-12983908d961" containerName="nova-cell1-conductor-db-sync" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075209 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffe11b-985f-4acd-a2ea-12983908d961" containerName="nova-cell1-conductor-db-sync" Jan 27 19:03:19 crc kubenswrapper[4915]: E0127 19:03:19.075251 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="init" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075261 4915 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="init" Jan 27 19:03:19 crc kubenswrapper[4915]: E0127 19:03:19.075273 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fcc76eb-7a19-4509-b8e0-1051dc3e3231" containerName="nova-manage" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075281 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcc76eb-7a19-4509-b8e0-1051dc3e3231" containerName="nova-manage" Jan 27 19:03:19 crc kubenswrapper[4915]: E0127 19:03:19.075293 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="dnsmasq-dns" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075301 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="dnsmasq-dns" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075503 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffe11b-985f-4acd-a2ea-12983908d961" containerName="nova-cell1-conductor-db-sync" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075522 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fcc76eb-7a19-4509-b8e0-1051dc3e3231" containerName="nova-manage" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.075539 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d649213-0d40-4201-9012-222a20cb227d" containerName="dnsmasq-dns" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.076303 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.078840 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.088842 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.142643 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.142985 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-log" containerID="cri-o://53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509" gracePeriod=30 Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.143005 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-api" containerID="cri-o://4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0" gracePeriod=30 Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.168928 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.181649 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.181916 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-log" containerID="cri-o://c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" gracePeriod=30 Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.182417 4915 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-metadata" containerID="cri-o://50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" gracePeriod=30 Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.237878 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.237947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrck2\" (UniqueName: \"kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.238070 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.335343 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.335420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.340571 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.340738 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.341433 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrck2\" (UniqueName: \"kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.344438 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.345202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.356955 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrck2\" (UniqueName: \"kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2\") pod \"nova-cell1-conductor-0\" (UID: 
\"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.368072 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d649213-0d40-4201-9012-222a20cb227d" path="/var/lib/kubelet/pods/7d649213-0d40-4201-9012-222a20cb227d/volumes" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.414528 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.739403 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.850298 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data\") pod \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.850376 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgk9r\" (UniqueName: \"kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r\") pod \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.850428 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs\") pod \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.850532 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs\") pod \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.850648 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle\") pod \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\" (UID: \"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111\") " Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.851355 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs" (OuterVolumeSpecName: "logs") pod "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" (UID: "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.851910 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.865035 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r" (OuterVolumeSpecName: "kube-api-access-rgk9r") pod "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" (UID: "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111"). InnerVolumeSpecName "kube-api-access-rgk9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.879777 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data" (OuterVolumeSpecName: "config-data") pod "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" (UID: "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.882247 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" (UID: "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.907162 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" (UID: "aea3ce45-bcf3-4e68-9c76-3bd52cf0e111"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.939721 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:03:19 crc kubenswrapper[4915]: W0127 19:03:19.950596 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5acd6cd6_a7d4_4839_b3c1_aec924797e53.slice/crio-d8424f65a2ac917a5cb2614f100ad21d5e7e5af93d2e28b09e5457143522d542 WatchSource:0}: Error finding container d8424f65a2ac917a5cb2614f100ad21d5e7e5af93d2e28b09e5457143522d542: Status 404 returned error can't find the container with id d8424f65a2ac917a5cb2614f100ad21d5e7e5af93d2e28b09e5457143522d542 Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.953587 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.953620 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.953632 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgk9r\" (UniqueName: \"kubernetes.io/projected/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-kube-api-access-rgk9r\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:19 crc kubenswrapper[4915]: I0127 19:03:19.953645 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.001718 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerID="53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509" exitCode=143 Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.001928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerDied","Data":"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"} Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.005961 4915 generic.go:334] "Generic (PLEG): container finished" podID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerID="50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" exitCode=0 Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.005997 4915 generic.go:334] "Generic (PLEG): container finished" podID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerID="c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" exitCode=143 Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.006063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerDied","Data":"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56"} Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.006094 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerDied","Data":"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47"} Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.006107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aea3ce45-bcf3-4e68-9c76-3bd52cf0e111","Type":"ContainerDied","Data":"c60695282e9d498da9cb82094c7578a89869132922c8389016d59c3822e204a8"} Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.006125 4915 scope.go:117] "RemoveContainer" 
containerID="50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.006268 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.021770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5acd6cd6-a7d4-4839-b3c1-aec924797e53","Type":"ContainerStarted","Data":"d8424f65a2ac917a5cb2614f100ad21d5e7e5af93d2e28b09e5457143522d542"} Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.035532 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerName="nova-scheduler-scheduler" containerID="cri-o://9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29" gracePeriod=30 Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.050503 4915 scope.go:117] "RemoveContainer" containerID="c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.086557 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.094946 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.106546 4915 scope.go:117] "RemoveContainer" containerID="50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" Jan 27 19:03:20 crc kubenswrapper[4915]: E0127 19:03:20.107891 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56\": container with ID starting with 50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56 not found: ID does not exist" 
containerID="50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.107936 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56"} err="failed to get container status \"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56\": rpc error: code = NotFound desc = could not find container \"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56\": container with ID starting with 50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56 not found: ID does not exist" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.107963 4915 scope.go:117] "RemoveContainer" containerID="c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" Jan 27 19:03:20 crc kubenswrapper[4915]: E0127 19:03:20.108323 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47\": container with ID starting with c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47 not found: ID does not exist" containerID="c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.108342 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47"} err="failed to get container status \"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47\": rpc error: code = NotFound desc = could not find container \"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47\": container with ID starting with c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47 not found: ID does not exist" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.108356 4915 scope.go:117] 
"RemoveContainer" containerID="50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.108677 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56"} err="failed to get container status \"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56\": rpc error: code = NotFound desc = could not find container \"50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56\": container with ID starting with 50f763cade6fdc4070a6a380194af915d8d2ba27e3e48b751c0978618c7d3e56 not found: ID does not exist" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.108693 4915 scope.go:117] "RemoveContainer" containerID="c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.108991 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47"} err="failed to get container status \"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47\": rpc error: code = NotFound desc = could not find container \"c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47\": container with ID starting with c53b57f0196ced48b6dbde3100a1595081e74c7d63064ea9e570c6e32a7f9a47 not found: ID does not exist" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.117819 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:20 crc kubenswrapper[4915]: E0127 19:03:20.120849 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-log" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.120909 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" 
containerName="nova-metadata-log" Jan 27 19:03:20 crc kubenswrapper[4915]: E0127 19:03:20.120938 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-metadata" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.120947 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-metadata" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.121467 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-metadata" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.121517 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" containerName="nova-metadata-log" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.125596 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.127895 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.127908 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.135212 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.260762 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrsf\" (UniqueName: \"kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0" Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.261103 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.261263 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.261402 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.261587 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.363651 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.363754 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqrsf\" (UniqueName: \"kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.363857 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.363885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.363906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.364373 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.368002 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.368176 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.368581 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.382523 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqrsf\" (UniqueName: \"kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf\") pod \"nova-metadata-0\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.459267 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 19:03:20 crc kubenswrapper[4915]: I0127 19:03:20.951326 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 19:03:20 crc kubenswrapper[4915]: W0127 19:03:20.959806 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73c5725_eee6_498e_825c_94bf98bb0432.slice/crio-e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476 WatchSource:0}: Error finding container e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476: Status 404 returned error can't find the container with id e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476
Jan 27 19:03:21 crc kubenswrapper[4915]: I0127 19:03:21.045363 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerStarted","Data":"e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476"}
Jan 27 19:03:21 crc kubenswrapper[4915]: I0127 19:03:21.047692 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5acd6cd6-a7d4-4839-b3c1-aec924797e53","Type":"ContainerStarted","Data":"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6"}
Jan 27 19:03:21 crc kubenswrapper[4915]: I0127 19:03:21.050005 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 27 19:03:21 crc kubenswrapper[4915]: I0127 19:03:21.074283 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.074260591 podStartE2EDuration="2.074260591s" podCreationTimestamp="2026-01-27 19:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:21.071237983 +0000 UTC m=+1292.429091687" watchObservedRunningTime="2026-01-27 19:03:21.074260591 +0000 UTC m=+1292.432114265"
Jan 27 19:03:21 crc kubenswrapper[4915]: I0127 19:03:21.373162 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea3ce45-bcf3-4e68-9c76-3bd52cf0e111" path="/var/lib/kubelet/pods/aea3ce45-bcf3-4e68-9c76-3bd52cf0e111/volumes"
Jan 27 19:03:22 crc kubenswrapper[4915]: I0127 19:03:22.058914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerStarted","Data":"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7"}
Jan 27 19:03:22 crc kubenswrapper[4915]: I0127 19:03:22.058977 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerStarted","Data":"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051"}
Jan 27 19:03:22 crc kubenswrapper[4915]: I0127 19:03:22.090426 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.090407363 podStartE2EDuration="2.090407363s" podCreationTimestamp="2026-01-27 19:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:22.081483014 +0000 UTC m=+1293.439336678" watchObservedRunningTime="2026-01-27 19:03:22.090407363 +0000 UTC m=+1293.448261027"
Jan 27 19:03:22 crc kubenswrapper[4915]: E0127 19:03:22.494596 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:03:22 crc kubenswrapper[4915]: E0127 19:03:22.497128 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:03:22 crc kubenswrapper[4915]: E0127 19:03:22.498967 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:03:22 crc kubenswrapper[4915]: E0127 19:03:22.498999 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerName="nova-scheduler-scheduler"
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.701195 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.836640 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data\") pod \"a7f63629-3c4c-4cef-9a51-6015c3c95211\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") "
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.836869 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle\") pod \"a7f63629-3c4c-4cef-9a51-6015c3c95211\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") "
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.836943 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqk46\" (UniqueName: \"kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46\") pod \"a7f63629-3c4c-4cef-9a51-6015c3c95211\" (UID: \"a7f63629-3c4c-4cef-9a51-6015c3c95211\") "
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.842743 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46" (OuterVolumeSpecName: "kube-api-access-wqk46") pod "a7f63629-3c4c-4cef-9a51-6015c3c95211" (UID: "a7f63629-3c4c-4cef-9a51-6015c3c95211"). InnerVolumeSpecName "kube-api-access-wqk46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.864700 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7f63629-3c4c-4cef-9a51-6015c3c95211" (UID: "a7f63629-3c4c-4cef-9a51-6015c3c95211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.866939 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data" (OuterVolumeSpecName: "config-data") pod "a7f63629-3c4c-4cef-9a51-6015c3c95211" (UID: "a7f63629-3c4c-4cef-9a51-6015c3c95211"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.927202 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.938936 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqk46\" (UniqueName: \"kubernetes.io/projected/a7f63629-3c4c-4cef-9a51-6015c3c95211-kube-api-access-wqk46\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.938983 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:23 crc kubenswrapper[4915]: I0127 19:03:23.939004 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f63629-3c4c-4cef-9a51-6015c3c95211-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.040471 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs\") pod \"775f6671-cdff-4296-9acd-59fa7d919c0d\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") "
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.040856 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkhsc\" (UniqueName: \"kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc\") pod \"775f6671-cdff-4296-9acd-59fa7d919c0d\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") "
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.040890 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data\") pod \"775f6671-cdff-4296-9acd-59fa7d919c0d\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") "
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.040943 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle\") pod \"775f6671-cdff-4296-9acd-59fa7d919c0d\" (UID: \"775f6671-cdff-4296-9acd-59fa7d919c0d\") "
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.041273 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs" (OuterVolumeSpecName: "logs") pod "775f6671-cdff-4296-9acd-59fa7d919c0d" (UID: "775f6671-cdff-4296-9acd-59fa7d919c0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.041597 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775f6671-cdff-4296-9acd-59fa7d919c0d-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.044556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc" (OuterVolumeSpecName: "kube-api-access-gkhsc") pod "775f6671-cdff-4296-9acd-59fa7d919c0d" (UID: "775f6671-cdff-4296-9acd-59fa7d919c0d"). InnerVolumeSpecName "kube-api-access-gkhsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.066548 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data" (OuterVolumeSpecName: "config-data") pod "775f6671-cdff-4296-9acd-59fa7d919c0d" (UID: "775f6671-cdff-4296-9acd-59fa7d919c0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.068167 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "775f6671-cdff-4296-9acd-59fa7d919c0d" (UID: "775f6671-cdff-4296-9acd-59fa7d919c0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.143277 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkhsc\" (UniqueName: \"kubernetes.io/projected/775f6671-cdff-4296-9acd-59fa7d919c0d-kube-api-access-gkhsc\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.143319 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.143333 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775f6671-cdff-4296-9acd-59fa7d919c0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.173951 4915 generic.go:334] "Generic (PLEG): container finished" podID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerID="4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0" exitCode=0
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.174000 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerDied","Data":"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"}
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.174035 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.174063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"775f6671-cdff-4296-9acd-59fa7d919c0d","Type":"ContainerDied","Data":"cb6892f8414535089a46f4e52226afaf7b2c53c5783384be8718f8873724bf66"}
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.174088 4915 scope.go:117] "RemoveContainer" containerID="4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.175838 4915 generic.go:334] "Generic (PLEG): container finished" podID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29" exitCode=0
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.175881 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7f63629-3c4c-4cef-9a51-6015c3c95211","Type":"ContainerDied","Data":"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"}
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.175941 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.175957 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7f63629-3c4c-4cef-9a51-6015c3c95211","Type":"ContainerDied","Data":"853e2e0f92178a3e3d14ca7d6c1291777bf223ebe63420b60844b9134c18c433"}
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.216837 4915 scope.go:117] "RemoveContainer" containerID="53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.229445 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.245810 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.257953 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.272591 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.273023 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-log"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273042 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-log"
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.273061 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-api"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273067 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-api"
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.273088 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerName="nova-scheduler-scheduler"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273094 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerName="nova-scheduler-scheduler"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273254 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" containerName="nova-scheduler-scheduler"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273268 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-api"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.273280 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" containerName="nova-api-log"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.274212 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.277388 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.285552 4915 scope.go:117] "RemoveContainer" containerID="4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.287032 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0\": container with ID starting with 4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0 not found: ID does not exist" containerID="4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.287089 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0"} err="failed to get container status \"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0\": rpc error: code = NotFound desc = could not find container \"4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0\": container with ID starting with 4fde1ed925d0cc23c40fd49806ac32f78656c5a2acc878706e90efe9d3420dd0 not found: ID does not exist"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.287114 4915 scope.go:117] "RemoveContainer" containerID="53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.287390 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509\": container with ID starting with 53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509 not found: ID does not exist" containerID="53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.287422 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509"} err="failed to get container status \"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509\": rpc error: code = NotFound desc = could not find container \"53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509\": container with ID starting with 53b2a95e1d0a0c6457f227ad2f8c0ef57d5d516c2a28f45224477d066015f509 not found: ID does not exist"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.287435 4915 scope.go:117] "RemoveContainer" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.290391 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.306622 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.307873 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.311143 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.322041 4915 scope.go:117] "RemoveContainer" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"
Jan 27 19:03:24 crc kubenswrapper[4915]: E0127 19:03:24.322642 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29\": container with ID starting with 9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29 not found: ID does not exist" containerID="9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.322766 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29"} err="failed to get container status \"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29\": rpc error: code = NotFound desc = could not find container \"9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29\": container with ID starting with 9bd34de08c292d4e5590d6c1e6d7fe93a895a93ba866965b13df85e683826b29 not found: ID does not exist"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.325372 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.332984 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.449274 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxmr\" (UniqueName: \"kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.449499 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.449664 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.449760 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbmq\" (UniqueName: \"kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.449940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.450028 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.450103 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551446 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxmr\" (UniqueName: \"kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551591 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551627 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551657 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbmq\" (UniqueName: \"kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.551700 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.552213 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.556581 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.557747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.558487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.569340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.571695 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbmq\" (UniqueName: \"kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq\") pod \"nova-api-0\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") " pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.572923 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxmr\" (UniqueName: \"kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr\") pod \"nova-scheduler-0\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " pod="openstack/nova-scheduler-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.603475 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:24 crc kubenswrapper[4915]: I0127 19:03:24.625829 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 19:03:25 crc kubenswrapper[4915]: W0127 19:03:25.129929 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2a303b_f58c_4b93_9a06_ef1e871c6945.slice/crio-aeaa87ea5f131b0539b31cca1c9de59e3a9fb0dffac956bb94700bdbb7a3b078 WatchSource:0}: Error finding container aeaa87ea5f131b0539b31cca1c9de59e3a9fb0dffac956bb94700bdbb7a3b078: Status 404 returned error can't find the container with id aeaa87ea5f131b0539b31cca1c9de59e3a9fb0dffac956bb94700bdbb7a3b078
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.130724 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.184214 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.197589 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerStarted","Data":"aeaa87ea5f131b0539b31cca1c9de59e3a9fb0dffac956bb94700bdbb7a3b078"}
Jan 27 19:03:25 crc kubenswrapper[4915]: W0127 19:03:25.203407 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534f4ad6_6d00_4ea6_ae1a_20f600d0fe8f.slice/crio-77d1c984fba11e196ed9bc5e0339baf05787aeb321baab2df6979f22691b0dda WatchSource:0}: Error finding container 77d1c984fba11e196ed9bc5e0339baf05787aeb321baab2df6979f22691b0dda: Status 404 returned error can't find the container with id 77d1c984fba11e196ed9bc5e0339baf05787aeb321baab2df6979f22691b0dda
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.373345 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775f6671-cdff-4296-9acd-59fa7d919c0d" path="/var/lib/kubelet/pods/775f6671-cdff-4296-9acd-59fa7d919c0d/volumes"
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.374209 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f63629-3c4c-4cef-9a51-6015c3c95211" path="/var/lib/kubelet/pods/a7f63629-3c4c-4cef-9a51-6015c3c95211/volumes"
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.459661 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 19:03:25 crc kubenswrapper[4915]: I0127 19:03:25.459716 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.210298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerStarted","Data":"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"}
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.210639 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerStarted","Data":"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"}
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.212573 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f","Type":"ContainerStarted","Data":"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3"}
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.212596 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f","Type":"ContainerStarted","Data":"77d1c984fba11e196ed9bc5e0339baf05787aeb321baab2df6979f22691b0dda"}
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.243878 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2438603759999998 podStartE2EDuration="2.243860376s" podCreationTimestamp="2026-01-27 19:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:26.238833574 +0000 UTC m=+1297.596687278" watchObservedRunningTime="2026-01-27 19:03:26.243860376 +0000 UTC m=+1297.601714050"
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.269253 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.269223183 podStartE2EDuration="2.269223183s" podCreationTimestamp="2026-01-27 19:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:26.262461309 +0000 UTC m=+1297.620314973" watchObservedRunningTime="2026-01-27 19:03:26.269223183 +0000 UTC m=+1297.627076897"
Jan 27 19:03:26 crc kubenswrapper[4915]: I0127 19:03:26.286705 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 27 19:03:29 crc kubenswrapper[4915]: I0127 19:03:29.443963 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 27 19:03:29 crc kubenswrapper[4915]: I0127 19:03:29.626092 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 19:03:29 crc kubenswrapper[4915]: I0127 19:03:29.643414 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 19:03:29 crc kubenswrapper[4915]: I0127 19:03:29.643695 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4c362f63-191f-4589-80ee-212b909db51e" containerName="kube-state-metrics" containerID="cri-o://7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9" gracePeriod=30
Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.186047
4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.253461 4915 generic.go:334] "Generic (PLEG): container finished" podID="4c362f63-191f-4589-80ee-212b909db51e" containerID="7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9" exitCode=2 Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.253516 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c362f63-191f-4589-80ee-212b909db51e","Type":"ContainerDied","Data":"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9"} Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.253537 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.253556 4915 scope.go:117] "RemoveContainer" containerID="7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.253543 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c362f63-191f-4589-80ee-212b909db51e","Type":"ContainerDied","Data":"f7b4f88ad3fb139053d97a533796c44be723c4d452eb537e9ac21d1a000075db"} Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.270212 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hztkr\" (UniqueName: \"kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr\") pod \"4c362f63-191f-4589-80ee-212b909db51e\" (UID: \"4c362f63-191f-4589-80ee-212b909db51e\") " Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.274510 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr" (OuterVolumeSpecName: "kube-api-access-hztkr") pod 
"4c362f63-191f-4589-80ee-212b909db51e" (UID: "4c362f63-191f-4589-80ee-212b909db51e"). InnerVolumeSpecName "kube-api-access-hztkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.281369 4915 scope.go:117] "RemoveContainer" containerID="7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9" Jan 27 19:03:30 crc kubenswrapper[4915]: E0127 19:03:30.281722 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9\": container with ID starting with 7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9 not found: ID does not exist" containerID="7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.281765 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9"} err="failed to get container status \"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9\": rpc error: code = NotFound desc = could not find container \"7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9\": container with ID starting with 7548d601ec369e2e34f04802814e4719086b0bb1a7874f7d1b23bbb8492a19f9 not found: ID does not exist" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.372381 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hztkr\" (UniqueName: \"kubernetes.io/projected/4c362f63-191f-4589-80ee-212b909db51e-kube-api-access-hztkr\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.459848 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.459917 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.589338 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.598397 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.614139 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:03:30 crc kubenswrapper[4915]: E0127 19:03:30.614552 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c362f63-191f-4589-80ee-212b909db51e" containerName="kube-state-metrics" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.614576 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c362f63-191f-4589-80ee-212b909db51e" containerName="kube-state-metrics" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.614771 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c362f63-191f-4589-80ee-212b909db51e" containerName="kube-state-metrics" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.615471 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.618894 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.619149 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.625130 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.677427 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfm5d\" (UniqueName: \"kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.677650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.677699 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.677774 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.779870 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfm5d\" (UniqueName: \"kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.779941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.779961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.779992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.786561 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.786901 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.802602 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfm5d\" (UniqueName: \"kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.807635 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " pod="openstack/kube-state-metrics-0" Jan 27 19:03:30 crc kubenswrapper[4915]: I0127 19:03:30.934632 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.369677 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c362f63-191f-4589-80ee-212b909db51e" path="/var/lib/kubelet/pods/4c362f63-191f-4589-80ee-212b909db51e/volumes" Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.413345 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:03:31 crc kubenswrapper[4915]: W0127 19:03:31.422773 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900ea28b_9ef8_41fe_a522_044443efa94b.slice/crio-5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490 WatchSource:0}: Error finding container 5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490: Status 404 returned error can't find the container with id 5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490 Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.473180 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.473180 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.535132 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.535857 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="proxy-httpd" containerID="cri-o://52e0bf2d77e29d1919a72d1ffede334758be72296ae021122132409dcfa2406e" gracePeriod=30 Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.535918 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-notification-agent" containerID="cri-o://a1d29196fbdde52ccfad6154f3b0fb7fe2b84b303b95e505e00422e31d73a010" gracePeriod=30 Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.535906 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="sg-core" containerID="cri-o://9a2451d511edb32b9951baa240367bb8330982d97a6ad4a00eb9bb810361d199" gracePeriod=30 Jan 27 19:03:31 crc kubenswrapper[4915]: I0127 19:03:31.536326 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-central-agent" containerID="cri-o://c87c5870eae5df2cc19225c57bea3e5f2f3af24a6f3798c34453c3aa25233791" gracePeriod=30 Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.282464 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"900ea28b-9ef8-41fe-a522-044443efa94b","Type":"ContainerStarted","Data":"5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490"} Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.285903 4915 generic.go:334] "Generic (PLEG): container finished" podID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerID="52e0bf2d77e29d1919a72d1ffede334758be72296ae021122132409dcfa2406e" exitCode=0 Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.286085 4915 generic.go:334] "Generic (PLEG): 
container finished" podID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerID="9a2451d511edb32b9951baa240367bb8330982d97a6ad4a00eb9bb810361d199" exitCode=2 Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.286182 4915 generic.go:334] "Generic (PLEG): container finished" podID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerID="c87c5870eae5df2cc19225c57bea3e5f2f3af24a6f3798c34453c3aa25233791" exitCode=0 Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.285976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerDied","Data":"52e0bf2d77e29d1919a72d1ffede334758be72296ae021122132409dcfa2406e"} Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.286349 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerDied","Data":"9a2451d511edb32b9951baa240367bb8330982d97a6ad4a00eb9bb810361d199"} Jan 27 19:03:32 crc kubenswrapper[4915]: I0127 19:03:32.286423 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerDied","Data":"c87c5870eae5df2cc19225c57bea3e5f2f3af24a6f3798c34453c3aa25233791"} Jan 27 19:03:33 crc kubenswrapper[4915]: I0127 19:03:33.299336 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"900ea28b-9ef8-41fe-a522-044443efa94b","Type":"ContainerStarted","Data":"d35fdae5565950f6f21c26cee4bc83f55e54749039323306d48c13acce01c4b3"} Jan 27 19:03:33 crc kubenswrapper[4915]: I0127 19:03:33.299932 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 19:03:33 crc kubenswrapper[4915]: I0127 19:03:33.334595 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.455267253 
podStartE2EDuration="3.334567969s" podCreationTimestamp="2026-01-27 19:03:30 +0000 UTC" firstStartedPulling="2026-01-27 19:03:31.427008361 +0000 UTC m=+1302.784862035" lastFinishedPulling="2026-01-27 19:03:32.306309047 +0000 UTC m=+1303.664162751" observedRunningTime="2026-01-27 19:03:33.325943919 +0000 UTC m=+1304.683797583" watchObservedRunningTime="2026-01-27 19:03:33.334567969 +0000 UTC m=+1304.692421633" Jan 27 19:03:34 crc kubenswrapper[4915]: I0127 19:03:34.605213 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:34 crc kubenswrapper[4915]: I0127 19:03:34.605523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:03:34 crc kubenswrapper[4915]: I0127 19:03:34.627054 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:03:34 crc kubenswrapper[4915]: I0127 19:03:34.669749 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:03:35 crc kubenswrapper[4915]: I0127 19:03:35.389990 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:03:35 crc kubenswrapper[4915]: I0127 19:03:35.687172 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:35 crc kubenswrapper[4915]: I0127 19:03:35.687225 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:03:40 crc 
kubenswrapper[4915]: I0127 19:03:40.466912 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:03:40 crc kubenswrapper[4915]: I0127 19:03:40.468271 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:03:40 crc kubenswrapper[4915]: I0127 19:03:40.480014 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:03:40 crc kubenswrapper[4915]: I0127 19:03:40.480288 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:03:40 crc kubenswrapper[4915]: I0127 19:03:40.948233 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.379702 4915 generic.go:334] "Generic (PLEG): container finished" podID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerID="a1d29196fbdde52ccfad6154f3b0fb7fe2b84b303b95e505e00422e31d73a010" exitCode=0 Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.380865 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerDied","Data":"a1d29196fbdde52ccfad6154f3b0fb7fe2b84b303b95e505e00422e31d73a010"} Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.551044 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585153 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585252 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585353 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8g5l\" (UniqueName: \"kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585390 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.585420 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml\") pod \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\" (UID: \"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7\") " Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.589136 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.589327 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.594048 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l" (OuterVolumeSpecName: "kube-api-access-r8g5l") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "kube-api-access-r8g5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.599913 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts" (OuterVolumeSpecName: "scripts") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.630865 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.670587 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687489 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687514 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687525 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8g5l\" (UniqueName: \"kubernetes.io/projected/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-kube-api-access-r8g5l\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687536 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687543 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687554 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.687882 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data" (OuterVolumeSpecName: "config-data") pod "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" (UID: "26e9c2f5-33a8-4439-97cc-5da38f3ca1e7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:41 crc kubenswrapper[4915]: I0127 19:03:41.789921 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.390331 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.392938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26e9c2f5-33a8-4439-97cc-5da38f3ca1e7","Type":"ContainerDied","Data":"9b64b5d209d788961beaf9e51fac63ae89efd1885b6e90508d1ff918172c86cd"} Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.392979 4915 scope.go:117] "RemoveContainer" containerID="52e0bf2d77e29d1919a72d1ffede334758be72296ae021122132409dcfa2406e" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.432713 4915 scope.go:117] "RemoveContainer" containerID="9a2451d511edb32b9951baa240367bb8330982d97a6ad4a00eb9bb810361d199" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.449110 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.462122 4915 scope.go:117] "RemoveContainer" containerID="a1d29196fbdde52ccfad6154f3b0fb7fe2b84b303b95e505e00422e31d73a010" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.464288 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.490975 4915 scope.go:117] "RemoveContainer" containerID="c87c5870eae5df2cc19225c57bea3e5f2f3af24a6f3798c34453c3aa25233791" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491099 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:42 crc 
kubenswrapper[4915]: E0127 19:03:42.491467 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="proxy-httpd" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491478 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="proxy-httpd" Jan 27 19:03:42 crc kubenswrapper[4915]: E0127 19:03:42.491489 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-notification-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491495 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-notification-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: E0127 19:03:42.491514 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-central-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491519 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-central-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: E0127 19:03:42.491531 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="sg-core" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491536 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="sg-core" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491711 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="proxy-httpd" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491726 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="sg-core" Jan 27 19:03:42 
crc kubenswrapper[4915]: I0127 19:03:42.491738 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-central-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.491755 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" containerName="ceilometer-notification-agent" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.493648 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.495422 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.496659 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.497181 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.498146 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.605658 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g8k\" (UniqueName: \"kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " 
pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606028 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606066 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606110 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606404 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.606543 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708533 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708589 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708651 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts\") pod \"ceilometer-0\" (UID: 
\"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708768 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708820 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.708883 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29g8k\" (UniqueName: \"kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.709454 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.709821 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.713627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.720579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.721097 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.730676 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.732121 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 19:03:42.741565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29g8k\" (UniqueName: \"kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k\") pod \"ceilometer-0\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") " pod="openstack/ceilometer-0" Jan 27 19:03:42 crc kubenswrapper[4915]: I0127 
19:03:42.819139 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.294712 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:43 crc kubenswrapper[4915]: W0127 19:03:43.295812 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf187d8_b7ee_4c54_9ee4_dc880d0993a2.slice/crio-88cc27d884121f421c6365cfb08ccd37131bda8a3223d0639545874530d9563b WatchSource:0}: Error finding container 88cc27d884121f421c6365cfb08ccd37131bda8a3223d0639545874530d9563b: Status 404 returned error can't find the container with id 88cc27d884121f421c6365cfb08ccd37131bda8a3223d0639545874530d9563b Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.357391 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.375993 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e9c2f5-33a8-4439-97cc-5da38f3ca1e7" path="/var/lib/kubelet/pods/26e9c2f5-33a8-4439-97cc-5da38f3ca1e7/volumes" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.402063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerStarted","Data":"88cc27d884121f421c6365cfb08ccd37131bda8a3223d0639545874530d9563b"} Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.406959 4915 generic.go:334] "Generic (PLEG): container finished" podID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" containerID="98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621" exitCode=137 Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.407017 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"83613ee1-cb91-4a7d-8b99-0c8912fa40a5","Type":"ContainerDied","Data":"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621"} Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.407046 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83613ee1-cb91-4a7d-8b99-0c8912fa40a5","Type":"ContainerDied","Data":"96d643301d62537a80fc9c3c655e6f0d840f36db2770372514ca85667a189af4"} Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.407066 4915 scope.go:117] "RemoveContainer" containerID="98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.407185 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.423698 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data\") pod \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.424022 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle\") pod \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.424123 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwns\" (UniqueName: \"kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns\") pod \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\" (UID: \"83613ee1-cb91-4a7d-8b99-0c8912fa40a5\") " Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.431721 4915 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns" (OuterVolumeSpecName: "kube-api-access-kmwns") pod "83613ee1-cb91-4a7d-8b99-0c8912fa40a5" (UID: "83613ee1-cb91-4a7d-8b99-0c8912fa40a5"). InnerVolumeSpecName "kube-api-access-kmwns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.454345 4915 scope.go:117] "RemoveContainer" containerID="98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621" Jan 27 19:03:43 crc kubenswrapper[4915]: E0127 19:03:43.455095 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621\": container with ID starting with 98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621 not found: ID does not exist" containerID="98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.455134 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621"} err="failed to get container status \"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621\": rpc error: code = NotFound desc = could not find container \"98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621\": container with ID starting with 98432153bfbb408fb37d5b72b947f997e19ea9923140645718109ff2b7621621 not found: ID does not exist" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.458966 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83613ee1-cb91-4a7d-8b99-0c8912fa40a5" (UID: "83613ee1-cb91-4a7d-8b99-0c8912fa40a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.473661 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data" (OuterVolumeSpecName: "config-data") pod "83613ee1-cb91-4a7d-8b99-0c8912fa40a5" (UID: "83613ee1-cb91-4a7d-8b99-0c8912fa40a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.529341 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.529375 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.529385 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwns\" (UniqueName: \"kubernetes.io/projected/83613ee1-cb91-4a7d-8b99-0c8912fa40a5-kube-api-access-kmwns\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.781677 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.796325 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.806658 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:43 crc kubenswrapper[4915]: E0127 19:03:43.807249 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 
19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.807287 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.807536 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.808259 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.811187 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.811450 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.811610 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.817158 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.936779 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfzl\" (UniqueName: \"kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.936863 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.936940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.936994 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:43 crc kubenswrapper[4915]: I0127 19:03:43.937019 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.038437 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.038495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.038597 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfzl\" (UniqueName: \"kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.038648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.038728 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.042469 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.043033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.043469 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.043528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.063961 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfzl\" (UniqueName: \"kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.129363 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.419455 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerStarted","Data":"29baa4644a5be72c910aaf77c789621b24d26e7d662630cdbb1739948bbcff88"} Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.610991 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.611603 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.611644 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.614537 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:03:44 crc kubenswrapper[4915]: I0127 19:03:44.632922 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.373311 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83613ee1-cb91-4a7d-8b99-0c8912fa40a5" path="/var/lib/kubelet/pods/83613ee1-cb91-4a7d-8b99-0c8912fa40a5/volumes" Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.456669 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bead4142-2b0e-41b1-85dd-cfe102596e93","Type":"ContainerStarted","Data":"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4"} Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.456732 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"bead4142-2b0e-41b1-85dd-cfe102596e93","Type":"ContainerStarted","Data":"0c26d89328faf1df7d0177da71f0f038c798cddc7e51df26e524cc7b983478b6"}
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.457415 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.469753 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.488540 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.488465847 podStartE2EDuration="2.488465847s" podCreationTimestamp="2026-01-27 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:45.48203865 +0000 UTC m=+1316.839892314" watchObservedRunningTime="2026-01-27 19:03:45.488465847 +0000 UTC m=+1316.846319531"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.669259 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"]
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.671704 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.677461 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"]
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778276 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778405 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tq4r\" (UniqueName: \"kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778461 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.778714 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.879955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.880015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.880065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.880094 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.880936 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.880961 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.881081 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.881249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.881485 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.881670 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.881828 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tq4r\" (UniqueName: \"kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.905652 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tq4r\" (UniqueName: \"kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r\") pod \"dnsmasq-dns-cd5cbd7b9-tjs2m\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:45 crc kubenswrapper[4915]: I0127 19:03:45.988432 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:46 crc kubenswrapper[4915]: I0127 19:03:46.459466 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"]
Jan 27 19:03:46 crc kubenswrapper[4915]: W0127 19:03:46.469771 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef0121e_da20_4815_bdba_f03c90dea333.slice/crio-477108d88d1dfcde20dfb4ecbfee5cde6d4599361a4aee275eedf5d272f32f6e WatchSource:0}: Error finding container 477108d88d1dfcde20dfb4ecbfee5cde6d4599361a4aee275eedf5d272f32f6e: Status 404 returned error can't find the container with id 477108d88d1dfcde20dfb4ecbfee5cde6d4599361a4aee275eedf5d272f32f6e
Jan 27 19:03:46 crc kubenswrapper[4915]: I0127 19:03:46.470384 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerStarted","Data":"0ee1bbd0626d60f89c4bce473ae593fac4806a3d49a4bb73f23cd08e17b5e58f"}
Jan 27 19:03:47 crc kubenswrapper[4915]: I0127 19:03:47.484594 4915 generic.go:334] "Generic (PLEG): container finished" podID="4ef0121e-da20-4815-bdba-f03c90dea333" containerID="fd22bfbe90460c42cf3c357448b7851d433eaff827b51a95098297dad113e150" exitCode=0
Jan 27 19:03:47 crc kubenswrapper[4915]: I0127 19:03:47.484765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" event={"ID":"4ef0121e-da20-4815-bdba-f03c90dea333","Type":"ContainerDied","Data":"fd22bfbe90460c42cf3c357448b7851d433eaff827b51a95098297dad113e150"}
Jan 27 19:03:47 crc kubenswrapper[4915]: I0127 19:03:47.486148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" event={"ID":"4ef0121e-da20-4815-bdba-f03c90dea333","Type":"ContainerStarted","Data":"477108d88d1dfcde20dfb4ecbfee5cde6d4599361a4aee275eedf5d272f32f6e"}
Jan 27 19:03:47 crc kubenswrapper[4915]: I0127 19:03:47.488998 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerStarted","Data":"efc24c3da07634cc15e5b2349c03a74cd00fbdb15a126807f434ea7a27727b4c"}
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.079747 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.502497 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-log" containerID="cri-o://fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0" gracePeriod=30
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.507562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" event={"ID":"4ef0121e-da20-4815-bdba-f03c90dea333","Type":"ContainerStarted","Data":"db6a738db0a4ebeeaf1061a5d01ac97d32cdc31fdce6dfd88815e40db583ca2f"}
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.507619 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.508280 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-api" containerID="cri-o://8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75" gracePeriod=30
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.541652 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" podStartSLOduration=3.541629721 podStartE2EDuration="3.541629721s" podCreationTimestamp="2026-01-27 19:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:48.536568478 +0000 UTC m=+1319.894422162" watchObservedRunningTime="2026-01-27 19:03:48.541629721 +0000 UTC m=+1319.899483385"
Jan 27 19:03:48 crc kubenswrapper[4915]: I0127 19:03:48.996034 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 19:03:49 crc kubenswrapper[4915]: I0127 19:03:49.131560 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 19:03:49 crc kubenswrapper[4915]: I0127 19:03:49.511426 4915 generic.go:334] "Generic (PLEG): container finished" podID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerID="fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0" exitCode=143
Jan 27 19:03:49 crc kubenswrapper[4915]: I0127 19:03:49.511679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerDied","Data":"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"}
Jan 27 19:03:49 crc kubenswrapper[4915]: I0127 19:03:49.515013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerStarted","Data":"500c3cbd52100e95b7b33d076db069ef375b4917f1b41fb18d6d968dc700ab8f"}
Jan 27 19:03:49 crc kubenswrapper[4915]: I0127 19:03:49.532382 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.357291821 podStartE2EDuration="7.532365489s" podCreationTimestamp="2026-01-27 19:03:42 +0000 UTC" firstStartedPulling="2026-01-27 19:03:43.298174659 +0000 UTC m=+1314.656028323" lastFinishedPulling="2026-01-27 19:03:48.473248327 +0000 UTC m=+1319.831101991" observedRunningTime="2026-01-27 19:03:49.531783325 +0000 UTC m=+1320.889636989" watchObservedRunningTime="2026-01-27 19:03:49.532365489 +0000 UTC m=+1320.890219153"
Jan 27 19:03:50 crc kubenswrapper[4915]: I0127 19:03:50.522835 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 19:03:50 crc kubenswrapper[4915]: I0127 19:03:50.522889 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-notification-agent" containerID="cri-o://0ee1bbd0626d60f89c4bce473ae593fac4806a3d49a4bb73f23cd08e17b5e58f" gracePeriod=30
Jan 27 19:03:50 crc kubenswrapper[4915]: I0127 19:03:50.522830 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-central-agent" containerID="cri-o://29baa4644a5be72c910aaf77c789621b24d26e7d662630cdbb1739948bbcff88" gracePeriod=30
Jan 27 19:03:50 crc kubenswrapper[4915]: I0127 19:03:50.522930 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="sg-core" containerID="cri-o://efc24c3da07634cc15e5b2349c03a74cd00fbdb15a126807f434ea7a27727b4c" gracePeriod=30
Jan 27 19:03:50 crc kubenswrapper[4915]: I0127 19:03:50.522891 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="proxy-httpd" containerID="cri-o://500c3cbd52100e95b7b33d076db069ef375b4917f1b41fb18d6d968dc700ab8f" gracePeriod=30
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.538510 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerID="500c3cbd52100e95b7b33d076db069ef375b4917f1b41fb18d6d968dc700ab8f" exitCode=0
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.540426 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerID="efc24c3da07634cc15e5b2349c03a74cd00fbdb15a126807f434ea7a27727b4c" exitCode=2
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.540455 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerID="0ee1bbd0626d60f89c4bce473ae593fac4806a3d49a4bb73f23cd08e17b5e58f" exitCode=0
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.540471 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerID="29baa4644a5be72c910aaf77c789621b24d26e7d662630cdbb1739948bbcff88" exitCode=0
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.538603 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerDied","Data":"500c3cbd52100e95b7b33d076db069ef375b4917f1b41fb18d6d968dc700ab8f"}
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.540991 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerDied","Data":"efc24c3da07634cc15e5b2349c03a74cd00fbdb15a126807f434ea7a27727b4c"}
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.541033 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerDied","Data":"0ee1bbd0626d60f89c4bce473ae593fac4806a3d49a4bb73f23cd08e17b5e58f"}
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.541058 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerDied","Data":"29baa4644a5be72c910aaf77c789621b24d26e7d662630cdbb1739948bbcff88"}
Jan 27 19:03:51 crc kubenswrapper[4915]: I0127 19:03:51.897998 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.024626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.024681 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.025744 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.025797 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.025966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026021 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026088 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29g8k\" (UniqueName: \"kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026120 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd\") pod \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\" (UID: \"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026174 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026544 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026966 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.026992 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.030492 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k" (OuterVolumeSpecName: "kube-api-access-29g8k") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "kube-api-access-29g8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.032484 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts" (OuterVolumeSpecName: "scripts") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.062250 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.106211 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.128370 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.128401 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.128412 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29g8k\" (UniqueName: \"kubernetes.io/projected/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-kube-api-access-29g8k\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.128421 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.133115 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.134625 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.144756 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data" (OuterVolumeSpecName: "config-data") pod "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" (UID: "4cf187d8-b7ee-4c54-9ee4-dc880d0993a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.230213 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jbmq\" (UniqueName: \"kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq\") pod \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.230277 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle\") pod \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.230325 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs\") pod \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.230395 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data\") pod \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\" (UID: \"4c2a303b-f58c-4b93-9a06-ef1e871c6945\") "
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.231043 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs" (OuterVolumeSpecName: "logs") pod "4c2a303b-f58c-4b93-9a06-ef1e871c6945" (UID: "4c2a303b-f58c-4b93-9a06-ef1e871c6945"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.231167 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2a303b-f58c-4b93-9a06-ef1e871c6945-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.231179 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.231189 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.234468 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq" (OuterVolumeSpecName: "kube-api-access-2jbmq") pod "4c2a303b-f58c-4b93-9a06-ef1e871c6945" (UID: "4c2a303b-f58c-4b93-9a06-ef1e871c6945"). InnerVolumeSpecName "kube-api-access-2jbmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.255933 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c2a303b-f58c-4b93-9a06-ef1e871c6945" (UID: "4c2a303b-f58c-4b93-9a06-ef1e871c6945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.264715 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data" (OuterVolumeSpecName: "config-data") pod "4c2a303b-f58c-4b93-9a06-ef1e871c6945" (UID: "4c2a303b-f58c-4b93-9a06-ef1e871c6945"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.332422 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.332455 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jbmq\" (UniqueName: \"kubernetes.io/projected/4c2a303b-f58c-4b93-9a06-ef1e871c6945-kube-api-access-2jbmq\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.332468 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2a303b-f58c-4b93-9a06-ef1e871c6945-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.550759 4915 generic.go:334] "Generic (PLEG): container finished" podID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerID="8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75" exitCode=0
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.550819 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerDied","Data":"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"}
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.551153 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2a303b-f58c-4b93-9a06-ef1e871c6945","Type":"ContainerDied","Data":"aeaa87ea5f131b0539b31cca1c9de59e3a9fb0dffac956bb94700bdbb7a3b078"}
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.551174 4915 scope.go:117] "RemoveContainer" containerID="8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.550857 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.556036 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4cf187d8-b7ee-4c54-9ee4-dc880d0993a2","Type":"ContainerDied","Data":"88cc27d884121f421c6365cfb08ccd37131bda8a3223d0639545874530d9563b"}
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.556125 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.594462 4915 scope.go:117] "RemoveContainer" containerID="fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.630922 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.650235 4915 scope.go:117] "RemoveContainer" containerID="8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.653349 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75\": container with ID starting with 8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75 not found: ID does not exist" containerID="8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.653413 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75"} err="failed to get container status \"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75\": rpc error: code = NotFound desc = could not find container \"8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75\": container with ID starting with 8525e5a9aeb078cb0cd4756546717222aa60bba646ed41d168ef0e52cdd32f75 not found: ID does not exist"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.653443 4915 scope.go:117] "RemoveContainer" containerID="fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.653992 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0\": container with ID starting with fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0 not found: ID does not exist" containerID="fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.654020 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0"} err="failed to get container status \"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0\": rpc error: code = NotFound desc = could not find container \"fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0\": container with ID starting with fe7039a47e6b6846ab2f97c32937606cfe5c82490555ef570713fc64ec45e5f0 not found: ID does not exist"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.654045 4915 scope.go:117] "RemoveContainer" containerID="500c3cbd52100e95b7b33d076db069ef375b4917f1b41fb18d6d968dc700ab8f"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.661358 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.670115 4915 scope.go:117] "RemoveContainer" containerID="efc24c3da07634cc15e5b2349c03a74cd00fbdb15a126807f434ea7a27727b4c"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675181 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675547 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-notification-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675566 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-notification-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675584 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="sg-core"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675589 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="sg-core"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675602 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-log"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675608 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-log"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675622 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-central-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675628 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-central-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675636 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="proxy-httpd"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675641 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="proxy-httpd"
Jan 27 19:03:52 crc kubenswrapper[4915]: E0127 19:03:52.675664 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-api"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.675670 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-api"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678654 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-central-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678684 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="proxy-httpd"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678700 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-api"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678715 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="sg-core"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678732 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" containerName="nova-api-log"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.678741 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" containerName="ceilometer-notification-agent"
Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.679748 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.682636 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.682938 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.683113 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.702457 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.704172 4915 scope.go:117] "RemoveContainer" containerID="0ee1bbd0626d60f89c4bce473ae593fac4806a3d49a4bb73f23cd08e17b5e58f" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.727018 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.733695 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.734356 4915 scope.go:117] "RemoveContainer" containerID="29baa4644a5be72c910aaf77c789621b24d26e7d662630cdbb1739948bbcff88" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.743936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.744007 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.744027 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwj9k\" (UniqueName: \"kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.744091 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.744200 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.744279 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.749383 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.751742 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.754332 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.754424 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.754467 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.772761 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846713 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846799 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846888 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846911 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwj9k\" (UniqueName: \"kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.846974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847004 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nw9x\" (UniqueName: \"kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 
crc kubenswrapper[4915]: I0127 19:03:52.847081 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847104 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847171 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.847212 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.848433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.851007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.851857 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.852366 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.856248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.874119 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwj9k\" (UniqueName: \"kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k\") pod \"nova-api-0\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " pod="openstack/nova-api-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948564 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948626 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nw9x\" (UniqueName: \"kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948695 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948724 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948757 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948797 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.948929 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.949517 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.949841 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.952052 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.953513 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.954234 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.954382 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.956892 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.964939 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nw9x\" (UniqueName: \"kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x\") pod \"ceilometer-0\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " pod="openstack/ceilometer-0" Jan 27 19:03:52 crc kubenswrapper[4915]: I0127 19:03:52.997902 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.089366 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.391359 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2a303b-f58c-4b93-9a06-ef1e871c6945" path="/var/lib/kubelet/pods/4c2a303b-f58c-4b93-9a06-ef1e871c6945/volumes" Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.392290 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf187d8-b7ee-4c54-9ee4-dc880d0993a2" path="/var/lib/kubelet/pods/4cf187d8-b7ee-4c54-9ee4-dc880d0993a2/volumes" Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.479330 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.571625 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerStarted","Data":"f86099dc4cae36150664989167296ce2f7f0415da8bc3c08c04e4bf9a39d136d"} Jan 27 19:03:53 crc kubenswrapper[4915]: I0127 19:03:53.657494 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:03:53 crc kubenswrapper[4915]: W0127 19:03:53.657602 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf765c967_cd1f_44be_bf35_200a93f06c08.slice/crio-03756e0180e06eef65db1069717371cc3e130eab91d786bd0c865735e0e8527b WatchSource:0}: Error finding container 03756e0180e06eef65db1069717371cc3e130eab91d786bd0c865735e0e8527b: Status 404 returned error can't find the container with id 03756e0180e06eef65db1069717371cc3e130eab91d786bd0c865735e0e8527b Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.131087 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.173205 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.584287 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerStarted","Data":"91dde6520b09dd9a3bee7b1dccad272eed3957544bd65c78044cde31b5a5a33a"} Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.584327 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerStarted","Data":"03756e0180e06eef65db1069717371cc3e130eab91d786bd0c865735e0e8527b"} Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.585776 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerStarted","Data":"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2"} Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.585845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerStarted","Data":"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4"} Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.603401 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.613038 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.613019059 podStartE2EDuration="2.613019059s" podCreationTimestamp="2026-01-27 19:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:54.605850865 +0000 UTC m=+1325.963704549" watchObservedRunningTime="2026-01-27 19:03:54.613019059 +0000 UTC m=+1325.970872723" Jan 27 19:03:54 crc 
kubenswrapper[4915]: I0127 19:03:54.786397 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nphkx"] Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.787931 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.790722 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.791042 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.807929 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nphkx"] Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.886032 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glg7l\" (UniqueName: \"kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.886266 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.886410 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: 
\"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.886472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.989447 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.989578 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.989653 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.989707 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glg7l\" (UniqueName: \"kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: 
\"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.999216 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:54 crc kubenswrapper[4915]: I0127 19:03:54.999267 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.008218 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.011437 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glg7l\" (UniqueName: \"kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l\") pod \"nova-cell1-cell-mapping-nphkx\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.220256 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.597719 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerStarted","Data":"fe19ba02a7b46df32800af54c264a8f50785062daa8c7d556b47dcdfaa7f8440"} Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.598581 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerStarted","Data":"b6d441dae839212e31371ba1c55077036e6541f4ea0dbed85fba70fa1100e2eb"} Jan 27 19:03:55 crc kubenswrapper[4915]: W0127 19:03:55.636037 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5852cc0c_e2c5_4f1f_a979_73fc5607b961.slice/crio-33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10 WatchSource:0}: Error finding container 33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10: Status 404 returned error can't find the container with id 33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10 Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.638631 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nphkx"] Jan 27 19:03:55 crc kubenswrapper[4915]: I0127 19:03:55.990056 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.063013 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.063305 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="dnsmasq-dns" 
containerID="cri-o://f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a" gracePeriod=10 Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.611499 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.611536 4915 generic.go:334] "Generic (PLEG): container finished" podID="5675837c-21b0-4474-984d-a253dfcb0df9" containerID="f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a" exitCode=0 Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.611565 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" event={"ID":"5675837c-21b0-4474-984d-a253dfcb0df9","Type":"ContainerDied","Data":"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a"} Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.611870 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" event={"ID":"5675837c-21b0-4474-984d-a253dfcb0df9","Type":"ContainerDied","Data":"58788f3c9acb71cd2420974b19ac534c3998eedbdcf217cc917ca63324024cca"} Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.611891 4915 scope.go:117] "RemoveContainer" containerID="f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.613061 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nphkx" event={"ID":"5852cc0c-e2c5-4f1f-a979-73fc5607b961","Type":"ContainerStarted","Data":"819859927b2d78abacf57b6ca8d9b1fe29088977d7d8e0a43137061459a30ee7"} Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.613086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nphkx" event={"ID":"5852cc0c-e2c5-4f1f-a979-73fc5607b961","Type":"ContainerStarted","Data":"33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10"} Jan 27 19:03:56 crc 
kubenswrapper[4915]: I0127 19:03:56.652025 4915 scope.go:117] "RemoveContainer" containerID="946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.742251 4915 scope.go:117] "RemoveContainer" containerID="f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a" Jan 27 19:03:56 crc kubenswrapper[4915]: E0127 19:03:56.743149 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a\": container with ID starting with f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a not found: ID does not exist" containerID="f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.743236 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a"} err="failed to get container status \"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a\": rpc error: code = NotFound desc = could not find container \"f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a\": container with ID starting with f25bf3422f3f7e32a635a003cddd6ea6e669acfb484a4de57c34506902c9d70a not found: ID does not exist" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.743307 4915 scope.go:117] "RemoveContainer" containerID="946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.744207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.744406 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7762\" (UniqueName: \"kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.744653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.744716 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.744791 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.745433 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0\") pod \"5675837c-21b0-4474-984d-a253dfcb0df9\" (UID: \"5675837c-21b0-4474-984d-a253dfcb0df9\") " Jan 27 19:03:56 crc kubenswrapper[4915]: E0127 19:03:56.747103 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad\": container with ID starting 
with 946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad not found: ID does not exist" containerID="946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.747148 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad"} err="failed to get container status \"946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad\": rpc error: code = NotFound desc = could not find container \"946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad\": container with ID starting with 946d0fbf0a60466bdadff2b0d2b3cce040d0f6778310ec166776ef746bc6b6ad not found: ID does not exist" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.751756 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762" (OuterVolumeSpecName: "kube-api-access-z7762") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). InnerVolumeSpecName "kube-api-access-z7762". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.800142 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config" (OuterVolumeSpecName: "config") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.804450 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.815141 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.821457 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.821756 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5675837c-21b0-4474-984d-a253dfcb0df9" (UID: "5675837c-21b0-4474-984d-a253dfcb0df9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848649 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7762\" (UniqueName: \"kubernetes.io/projected/5675837c-21b0-4474-984d-a253dfcb0df9-kube-api-access-z7762\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848695 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848708 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848722 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848737 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:56 crc kubenswrapper[4915]: I0127 19:03:56.848749 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5675837c-21b0-4474-984d-a253dfcb0df9-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.633926 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-kffk9" Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.656729 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nphkx" podStartSLOduration=3.6567045030000003 podStartE2EDuration="3.656704503s" podCreationTimestamp="2026-01-27 19:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:03:56.65215515 +0000 UTC m=+1328.010008814" watchObservedRunningTime="2026-01-27 19:03:57.656704503 +0000 UTC m=+1329.014558167" Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.663579 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerStarted","Data":"442aa21a700a655bb2c5e592d27927399226ce7e4818c5d5466669e4014e4238"} Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.664293 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.666083 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.684029 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-kffk9"] Jan 27 19:03:57 crc kubenswrapper[4915]: I0127 19:03:57.693922 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.191449651 podStartE2EDuration="5.693903888s" podCreationTimestamp="2026-01-27 19:03:52 +0000 UTC" firstStartedPulling="2026-01-27 19:03:53.659728292 +0000 UTC m=+1325.017581956" lastFinishedPulling="2026-01-27 19:03:57.162182489 +0000 UTC m=+1328.520036193" observedRunningTime="2026-01-27 19:03:57.68987394 +0000 UTC m=+1329.047727624" watchObservedRunningTime="2026-01-27 
19:03:57.693903888 +0000 UTC m=+1329.051757552" Jan 27 19:03:59 crc kubenswrapper[4915]: I0127 19:03:59.375721 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" path="/var/lib/kubelet/pods/5675837c-21b0-4474-984d-a253dfcb0df9/volumes" Jan 27 19:04:00 crc kubenswrapper[4915]: I0127 19:04:00.707677 4915 generic.go:334] "Generic (PLEG): container finished" podID="5852cc0c-e2c5-4f1f-a979-73fc5607b961" containerID="819859927b2d78abacf57b6ca8d9b1fe29088977d7d8e0a43137061459a30ee7" exitCode=0 Jan 27 19:04:00 crc kubenswrapper[4915]: I0127 19:04:00.707780 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nphkx" event={"ID":"5852cc0c-e2c5-4f1f-a979-73fc5607b961","Type":"ContainerDied","Data":"819859927b2d78abacf57b6ca8d9b1fe29088977d7d8e0a43137061459a30ee7"} Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.109342 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.276648 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data\") pod \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.276779 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glg7l\" (UniqueName: \"kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l\") pod \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.276847 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts\") pod \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.277028 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle\") pod \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\" (UID: \"5852cc0c-e2c5-4f1f-a979-73fc5607b961\") " Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.286356 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts" (OuterVolumeSpecName: "scripts") pod "5852cc0c-e2c5-4f1f-a979-73fc5607b961" (UID: "5852cc0c-e2c5-4f1f-a979-73fc5607b961"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.286931 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l" (OuterVolumeSpecName: "kube-api-access-glg7l") pod "5852cc0c-e2c5-4f1f-a979-73fc5607b961" (UID: "5852cc0c-e2c5-4f1f-a979-73fc5607b961"). InnerVolumeSpecName "kube-api-access-glg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.309902 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5852cc0c-e2c5-4f1f-a979-73fc5607b961" (UID: "5852cc0c-e2c5-4f1f-a979-73fc5607b961"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.328665 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data" (OuterVolumeSpecName: "config-data") pod "5852cc0c-e2c5-4f1f-a979-73fc5607b961" (UID: "5852cc0c-e2c5-4f1f-a979-73fc5607b961"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.380025 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.380077 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.380096 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glg7l\" (UniqueName: \"kubernetes.io/projected/5852cc0c-e2c5-4f1f-a979-73fc5607b961-kube-api-access-glg7l\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.380115 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5852cc0c-e2c5-4f1f-a979-73fc5607b961-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.734623 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nphkx" event={"ID":"5852cc0c-e2c5-4f1f-a979-73fc5607b961","Type":"ContainerDied","Data":"33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10"} Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.734680 4915 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="33b07dbd4bc34020e20ecc7f3424096cc2425daec93b565167b9c8069d510c10" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.734821 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nphkx" Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.931793 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.933291 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-log" containerID="cri-o://b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" gracePeriod=30 Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.933407 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-api" containerID="cri-o://f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" gracePeriod=30 Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.980545 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:02 crc kubenswrapper[4915]: I0127 19:04:02.981148 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerName="nova-scheduler-scheduler" containerID="cri-o://e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" gracePeriod=30 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.003082 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.003338 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" 
containerName="nova-metadata-log" containerID="cri-o://ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051" gracePeriod=30 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.003442 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" containerID="cri-o://fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7" gracePeriod=30 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.526714 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603052 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs\") pod \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603162 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data\") pod \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603209 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs\") pod \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603239 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle\") pod 
\"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603356 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwj9k\" (UniqueName: \"kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k\") pod \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603439 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs\") pod \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\" (UID: \"8f560f18-9e53-4de4-a0f2-41b922ec1b65\") " Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.603442 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs" (OuterVolumeSpecName: "logs") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.604010 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f560f18-9e53-4de4-a0f2-41b922ec1b65-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.615237 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k" (OuterVolumeSpecName: "kube-api-access-fwj9k") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "kube-api-access-fwj9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.637851 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data" (OuterVolumeSpecName: "config-data") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.662289 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.666444 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.684512 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f560f18-9e53-4de4-a0f2-41b922ec1b65" (UID: "8f560f18-9e53-4de4-a0f2-41b922ec1b65"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.706225 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.706443 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.706504 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.706582 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwj9k\" (UniqueName: \"kubernetes.io/projected/8f560f18-9e53-4de4-a0f2-41b922ec1b65-kube-api-access-fwj9k\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.706687 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f560f18-9e53-4de4-a0f2-41b922ec1b65-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.744554 4915 generic.go:334] "Generic (PLEG): container finished" podID="e73c5725-eee6-498e-825c-94bf98bb0432" containerID="ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051" exitCode=143 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.744623 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerDied","Data":"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051"} Jan 27 19:04:03 crc 
kubenswrapper[4915]: I0127 19:04:03.746897 4915 generic.go:334] "Generic (PLEG): container finished" podID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerID="f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" exitCode=0 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.746916 4915 generic.go:334] "Generic (PLEG): container finished" podID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerID="b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" exitCode=143 Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.746936 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerDied","Data":"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2"} Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.746965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerDied","Data":"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4"} Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.746975 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8f560f18-9e53-4de4-a0f2-41b922ec1b65","Type":"ContainerDied","Data":"f86099dc4cae36150664989167296ce2f7f0415da8bc3c08c04e4bf9a39d136d"} Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.746992 4915 scope.go:117] "RemoveContainer" containerID="f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.747118 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.790477 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.806611 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816397 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.816772 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5852cc0c-e2c5-4f1f-a979-73fc5607b961" containerName="nova-manage" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816808 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5852cc0c-e2c5-4f1f-a979-73fc5607b961" containerName="nova-manage" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.816818 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-log" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816825 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-log" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.816841 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-api" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816847 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-api" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.816865 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="dnsmasq-dns" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816871 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="dnsmasq-dns" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.816885 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="init" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.816891 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="init" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.817060 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-log" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.817082 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" containerName="nova-api-api" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.817093 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5675837c-21b0-4474-984d-a253dfcb0df9" containerName="dnsmasq-dns" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.817105 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5852cc0c-e2c5-4f1f-a979-73fc5607b961" containerName="nova-manage" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.818035 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.819094 4915 scope.go:117] "RemoveContainer" containerID="b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.825248 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.825307 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.825368 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.833721 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.873463 4915 scope.go:117] "RemoveContainer" containerID="f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.873920 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2\": container with ID starting with f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2 not found: ID does not exist" containerID="f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.873961 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2"} err="failed to get container status \"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2\": rpc error: code = NotFound desc = could not find container \"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2\": container with 
ID starting with f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2 not found: ID does not exist" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.873986 4915 scope.go:117] "RemoveContainer" containerID="b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" Jan 27 19:04:03 crc kubenswrapper[4915]: E0127 19:04:03.874395 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4\": container with ID starting with b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4 not found: ID does not exist" containerID="b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.874431 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4"} err="failed to get container status \"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4\": rpc error: code = NotFound desc = could not find container \"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4\": container with ID starting with b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4 not found: ID does not exist" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.874453 4915 scope.go:117] "RemoveContainer" containerID="f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.874684 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2"} err="failed to get container status \"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2\": rpc error: code = NotFound desc = could not find container \"f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2\": 
container with ID starting with f7ff8628131102d671746563500614fd39147fac8bfef03b44c336ea76edfdf2 not found: ID does not exist" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.874702 4915 scope.go:117] "RemoveContainer" containerID="b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.875194 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4"} err="failed to get container status \"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4\": rpc error: code = NotFound desc = could not find container \"b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4\": container with ID starting with b9d86ac7b4b0673b027c36a965d4ea48c0cec8653e1f1bc19ef0c883d352d8b4 not found: ID does not exist" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909349 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909550 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzj2\" (UniqueName: \"kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " 
pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909628 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:03 crc kubenswrapper[4915]: I0127 19:04:03.909678 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011599 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011686 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011743 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011780 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011841 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.011939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzj2\" (UniqueName: \"kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.012812 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.015636 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.015683 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.016230 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.016768 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.029433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzj2\" (UniqueName: \"kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2\") pod \"nova-api-0\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") " pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.169380 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:04:04 crc kubenswrapper[4915]: E0127 19:04:04.629947 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:04:04 crc kubenswrapper[4915]: E0127 19:04:04.633449 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:04:04 crc kubenswrapper[4915]: E0127 19:04:04.635202 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 19:04:04 crc kubenswrapper[4915]: E0127 19:04:04.635273 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerName="nova-scheduler-scheduler" Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.663536 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:04 crc kubenswrapper[4915]: I0127 19:04:04.761923 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerStarted","Data":"5025463ff8a29d3149d68f010b1b41c3ce646798a3bf71eeadf7d64743293617"} Jan 27 19:04:05 crc kubenswrapper[4915]: I0127 19:04:05.367142 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f560f18-9e53-4de4-a0f2-41b922ec1b65" path="/var/lib/kubelet/pods/8f560f18-9e53-4de4-a0f2-41b922ec1b65/volumes" Jan 27 19:04:05 crc kubenswrapper[4915]: I0127 19:04:05.777707 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerStarted","Data":"a933786d7cc9361f17ff67cb45c92e9e82dd73239e589fc136036527cb4a3031"} Jan 27 19:04:05 crc kubenswrapper[4915]: I0127 19:04:05.777758 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerStarted","Data":"361b3a3c5ffb20740339364094b1dc7890ed96dc005e72cd38dfd352f50114ed"} Jan 27 19:04:05 crc kubenswrapper[4915]: I0127 19:04:05.818296 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.818272334 podStartE2EDuration="2.818272334s" podCreationTimestamp="2026-01-27 19:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:05.803527915 +0000 UTC m=+1337.161381599" watchObservedRunningTime="2026-01-27 19:04:05.818272334 +0000 UTC m=+1337.176126008" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.144571 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:49558->10.217.0.196:8775: read: connection reset by peer" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.144577 4915 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:49548->10.217.0.196:8775: read: connection reset by peer" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.636309 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.762604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqrsf\" (UniqueName: \"kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf\") pod \"e73c5725-eee6-498e-825c-94bf98bb0432\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.762739 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data\") pod \"e73c5725-eee6-498e-825c-94bf98bb0432\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.762768 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs\") pod \"e73c5725-eee6-498e-825c-94bf98bb0432\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.762816 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle\") pod \"e73c5725-eee6-498e-825c-94bf98bb0432\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.762960 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs\") pod \"e73c5725-eee6-498e-825c-94bf98bb0432\" (UID: \"e73c5725-eee6-498e-825c-94bf98bb0432\") " Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.763281 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs" (OuterVolumeSpecName: "logs") pod "e73c5725-eee6-498e-825c-94bf98bb0432" (UID: "e73c5725-eee6-498e-825c-94bf98bb0432"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.763523 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73c5725-eee6-498e-825c-94bf98bb0432-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.784818 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf" (OuterVolumeSpecName: "kube-api-access-hqrsf") pod "e73c5725-eee6-498e-825c-94bf98bb0432" (UID: "e73c5725-eee6-498e-825c-94bf98bb0432"). InnerVolumeSpecName "kube-api-access-hqrsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.792857 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e73c5725-eee6-498e-825c-94bf98bb0432" (UID: "e73c5725-eee6-498e-825c-94bf98bb0432"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.794916 4915 generic.go:334] "Generic (PLEG): container finished" podID="e73c5725-eee6-498e-825c-94bf98bb0432" containerID="fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7" exitCode=0 Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.796093 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.796645 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerDied","Data":"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7"} Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.796682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73c5725-eee6-498e-825c-94bf98bb0432","Type":"ContainerDied","Data":"e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476"} Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.796705 4915 scope.go:117] "RemoveContainer" containerID="fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.804284 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data" (OuterVolumeSpecName: "config-data") pod "e73c5725-eee6-498e-825c-94bf98bb0432" (UID: "e73c5725-eee6-498e-825c-94bf98bb0432"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.825213 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e73c5725-eee6-498e-825c-94bf98bb0432" (UID: "e73c5725-eee6-498e-825c-94bf98bb0432"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.867124 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.867156 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqrsf\" (UniqueName: \"kubernetes.io/projected/e73c5725-eee6-498e-825c-94bf98bb0432-kube-api-access-hqrsf\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.867165 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.867175 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73c5725-eee6-498e-825c-94bf98bb0432-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.888327 4915 scope.go:117] "RemoveContainer" containerID="ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.904561 4915 scope.go:117] "RemoveContainer" containerID="fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7" Jan 27 19:04:06 crc 
kubenswrapper[4915]: E0127 19:04:06.904914 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7\": container with ID starting with fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7 not found: ID does not exist" containerID="fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.904953 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7"} err="failed to get container status \"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7\": rpc error: code = NotFound desc = could not find container \"fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7\": container with ID starting with fd19fc59927e8d95edf1bd8a8f7460d53df754f2ed176e6b35daad4db61bb2e7 not found: ID does not exist" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.904972 4915 scope.go:117] "RemoveContainer" containerID="ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051" Jan 27 19:04:06 crc kubenswrapper[4915]: E0127 19:04:06.905191 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051\": container with ID starting with ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051 not found: ID does not exist" containerID="ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051" Jan 27 19:04:06 crc kubenswrapper[4915]: I0127 19:04:06.905219 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051"} err="failed to get container status 
\"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051\": rpc error: code = NotFound desc = could not find container \"ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051\": container with ID starting with ad8415002a6a411734c8884332a0a3e0f43ca32978f590fa332234bd4e92e051 not found: ID does not exist" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.137736 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.149887 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.160286 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:07 crc kubenswrapper[4915]: E0127 19:04:07.160851 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-log" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.160944 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-log" Jan 27 19:04:07 crc kubenswrapper[4915]: E0127 19:04:07.161000 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.161067 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.161300 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" containerName="nova-metadata-metadata" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.161364 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" 
containerName="nova-metadata-log" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.162479 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.167221 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.167531 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.182300 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:07 crc kubenswrapper[4915]: E0127 19:04:07.243514 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73c5725_eee6_498e_825c_94bf98bb0432.slice/crio-e96eb4f6375e6554627c29d8a99817788c5e7413108e9ab9d809fda35667e476\": RecentStats: unable to find data in memory cache]" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.273077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.273193 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbtw\" (UniqueName: \"kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.273236 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.273268 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.273334 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.374679 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73c5725-eee6-498e-825c-94bf98bb0432" path="/var/lib/kubelet/pods/e73c5725-eee6-498e-825c-94bf98bb0432/volumes" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.376841 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.377035 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fbtw\" (UniqueName: \"kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") 
" pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.377114 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.377203 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.377317 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.378239 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.384220 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.384346 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.384372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.394908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbtw\" (UniqueName: \"kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw\") pod \"nova-metadata-0\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.483987 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:04:07 crc kubenswrapper[4915]: I0127 19:04:07.952895 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:07 crc kubenswrapper[4915]: W0127 19:04:07.955707 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc16c397_45e2_4878_8927_752a1832ec0a.slice/crio-3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202 WatchSource:0}: Error finding container 3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202: Status 404 returned error can't find the container with id 3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202 Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.695003 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.802928 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle\") pod \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.803041 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsxmr\" (UniqueName: \"kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr\") pod \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.803214 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data\") pod \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\" (UID: \"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f\") " Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.807741 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr" (OuterVolumeSpecName: "kube-api-access-gsxmr") pod "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" (UID: "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f"). InnerVolumeSpecName "kube-api-access-gsxmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.823074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerStarted","Data":"3071a4d932d0954ba665a0964a52de9a30feb98cb297eae5bdd4c4d385c63546"} Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.823158 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerStarted","Data":"d835385c0bed6004e4974dcc64c9e29fb80d508d9fd2f147d3785f1645ad55ed"} Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.823186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerStarted","Data":"3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202"} Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.826279 4915 generic.go:334] "Generic (PLEG): container finished" podID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" exitCode=0 Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.826340 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.827425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f","Type":"ContainerDied","Data":"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3"} Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.827500 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f","Type":"ContainerDied","Data":"77d1c984fba11e196ed9bc5e0339baf05787aeb321baab2df6979f22691b0dda"} Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.827561 4915 scope.go:117] "RemoveContainer" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.829580 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" (UID: "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.849011 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data" (OuterVolumeSpecName: "config-data") pod "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" (UID: "534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.850141 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.850118629 podStartE2EDuration="1.850118629s" podCreationTimestamp="2026-01-27 19:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:08.845034215 +0000 UTC m=+1340.202887959" watchObservedRunningTime="2026-01-27 19:04:08.850118629 +0000 UTC m=+1340.207972293" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.871881 4915 scope.go:117] "RemoveContainer" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" Jan 27 19:04:08 crc kubenswrapper[4915]: E0127 19:04:08.872287 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3\": container with ID starting with e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3 not found: ID does not exist" containerID="e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.872325 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3"} err="failed to get container status \"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3\": rpc error: code = NotFound desc = could not find container \"e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3\": container with ID starting with e2eb3644a3d3f56f51d28a2c7c306b05c8f02baeabd91a14900fe406028007e3 not found: ID does not exist" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.906579 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.906710 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:08 crc kubenswrapper[4915]: I0127 19:04:08.906747 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsxmr\" (UniqueName: \"kubernetes.io/projected/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f-kube-api-access-gsxmr\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.174477 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.184665 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.229855 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:09 crc kubenswrapper[4915]: E0127 19:04:09.230299 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerName="nova-scheduler-scheduler" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.230314 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerName="nova-scheduler-scheduler" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.230481 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" containerName="nova-scheduler-scheduler" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.231127 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.239647 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.276461 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.313706 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddpd\" (UniqueName: \"kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.314066 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.314262 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.370401 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f" path="/var/lib/kubelet/pods/534f4ad6-6d00-4ea6-ae1a-20f600d0fe8f/volumes" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.416335 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.416488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddpd\" (UniqueName: \"kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.416582 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.421158 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.443575 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.448283 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddpd\" (UniqueName: \"kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd\") pod \"nova-scheduler-0\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " 
pod="openstack/nova-scheduler-0" Jan 27 19:04:09 crc kubenswrapper[4915]: I0127 19:04:09.611986 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:04:10 crc kubenswrapper[4915]: I0127 19:04:10.074277 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:10 crc kubenswrapper[4915]: W0127 19:04:10.081199 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f29847_cfad_4f69_aff9_9c62b0088754.slice/crio-4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4 WatchSource:0}: Error finding container 4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4: Status 404 returned error can't find the container with id 4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4 Jan 27 19:04:10 crc kubenswrapper[4915]: I0127 19:04:10.850302 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5f29847-cfad-4f69-aff9-9c62b0088754","Type":"ContainerStarted","Data":"34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063"} Jan 27 19:04:10 crc kubenswrapper[4915]: I0127 19:04:10.850601 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5f29847-cfad-4f69-aff9-9c62b0088754","Type":"ContainerStarted","Data":"4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4"} Jan 27 19:04:10 crc kubenswrapper[4915]: I0127 19:04:10.874596 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.874572262 podStartE2EDuration="1.874572262s" podCreationTimestamp="2026-01-27 19:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:04:10.871333033 +0000 UTC m=+1342.229186707" 
watchObservedRunningTime="2026-01-27 19:04:10.874572262 +0000 UTC m=+1342.232425936" Jan 27 19:04:12 crc kubenswrapper[4915]: I0127 19:04:12.484746 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:04:12 crc kubenswrapper[4915]: I0127 19:04:12.485087 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 19:04:14 crc kubenswrapper[4915]: I0127 19:04:14.169546 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:04:14 crc kubenswrapper[4915]: I0127 19:04:14.170025 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 19:04:14 crc kubenswrapper[4915]: I0127 19:04:14.612968 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 19:04:15 crc kubenswrapper[4915]: I0127 19:04:15.184957 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:04:15 crc kubenswrapper[4915]: I0127 19:04:15.185061 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:04:17 crc kubenswrapper[4915]: I0127 19:04:17.484503 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:04:17 crc kubenswrapper[4915]: I0127 19:04:17.484985 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 19:04:18 crc 
kubenswrapper[4915]: I0127 19:04:18.499125 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 19:04:18 crc kubenswrapper[4915]: I0127 19:04:18.499144 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 19:04:19 crc kubenswrapper[4915]: I0127 19:04:19.612685 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 19:04:19 crc kubenswrapper[4915]: I0127 19:04:19.636289 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 19:04:19 crc kubenswrapper[4915]: I0127 19:04:19.992515 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 19:04:20 crc kubenswrapper[4915]: I0127 19:04:20.625052 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:04:20 crc kubenswrapper[4915]: I0127 19:04:20.625112 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
19:04:23 crc kubenswrapper[4915]: I0127 19:04:23.102296 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 19:04:24 crc kubenswrapper[4915]: I0127 19:04:24.176634 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:04:24 crc kubenswrapper[4915]: I0127 19:04:24.177196 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:04:24 crc kubenswrapper[4915]: I0127 19:04:24.177861 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 19:04:24 crc kubenswrapper[4915]: I0127 19:04:24.189644 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:04:24 crc kubenswrapper[4915]: I0127 19:04:24.994046 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 19:04:25 crc kubenswrapper[4915]: I0127 19:04:25.011853 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 19:04:27 crc kubenswrapper[4915]: I0127 19:04:27.492927 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:04:27 crc kubenswrapper[4915]: I0127 19:04:27.497461 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 19:04:27 crc kubenswrapper[4915]: I0127 19:04:27.500625 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:04:28 crc kubenswrapper[4915]: I0127 19:04:28.031911 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.772357 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:33 crc 
kubenswrapper[4915]: I0127 19:04:33.775451 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.781011 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.815958 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln4n\" (UniqueName: \"kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.816142 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.816244 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.917731 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln4n\" (UniqueName: \"kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc 
kubenswrapper[4915]: I0127 19:04:33.917877 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.917926 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.918442 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.919016 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:33 crc kubenswrapper[4915]: I0127 19:04:33.939660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln4n\" (UniqueName: \"kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n\") pod \"redhat-operators-btsbx\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:34 crc kubenswrapper[4915]: I0127 19:04:34.142060 4915 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:34 crc kubenswrapper[4915]: I0127 19:04:34.640327 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:35 crc kubenswrapper[4915]: I0127 19:04:35.098576 4915 generic.go:334] "Generic (PLEG): container finished" podID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerID="c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526" exitCode=0 Jan 27 19:04:35 crc kubenswrapper[4915]: I0127 19:04:35.098679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerDied","Data":"c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526"} Jan 27 19:04:35 crc kubenswrapper[4915]: I0127 19:04:35.098904 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerStarted","Data":"ff4d4653225d64498e65d9f4bea6a59d29dd117687df20b61e746540917dfa91"} Jan 27 19:04:35 crc kubenswrapper[4915]: I0127 19:04:35.100981 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:04:37 crc kubenswrapper[4915]: I0127 19:04:37.124201 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerStarted","Data":"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138"} Jan 27 19:04:39 crc kubenswrapper[4915]: I0127 19:04:39.149480 4915 generic.go:334] "Generic (PLEG): container finished" podID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerID="bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138" exitCode=0 Jan 27 19:04:39 crc kubenswrapper[4915]: I0127 19:04:39.149572 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerDied","Data":"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138"} Jan 27 19:04:40 crc kubenswrapper[4915]: I0127 19:04:40.164326 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerStarted","Data":"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39"} Jan 27 19:04:40 crc kubenswrapper[4915]: I0127 19:04:40.208470 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btsbx" podStartSLOduration=2.768735936 podStartE2EDuration="7.208444296s" podCreationTimestamp="2026-01-27 19:04:33 +0000 UTC" firstStartedPulling="2026-01-27 19:04:35.100753405 +0000 UTC m=+1366.458607069" lastFinishedPulling="2026-01-27 19:04:39.540461725 +0000 UTC m=+1370.898315429" observedRunningTime="2026-01-27 19:04:40.183648949 +0000 UTC m=+1371.541502613" watchObservedRunningTime="2026-01-27 19:04:40.208444296 +0000 UTC m=+1371.566297990" Jan 27 19:04:44 crc kubenswrapper[4915]: I0127 19:04:44.142651 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:44 crc kubenswrapper[4915]: I0127 19:04:44.143410 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:45 crc kubenswrapper[4915]: I0127 19:04:45.194041 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btsbx" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="registry-server" probeResult="failure" output=< Jan 27 19:04:45 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 19:04:45 crc kubenswrapper[4915]: > Jan 27 19:04:47 crc kubenswrapper[4915]: 
I0127 19:04:47.627885 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.628377 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0060401c-f9b6-4772-bce5-bda7633de81a" containerName="openstackclient" containerID="cri-o://e18f372d20db23571278d2902be8558c9c9c1236c8adb8a38d079d6827836640" gracePeriod=2 Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.647739 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.771699 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.771978 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="cinder-scheduler" containerID="cri-o://cae1cda15bc05c31ba74684aadd103b351b56954bbbc994031b930a06b055b44" gracePeriod=30 Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.772125 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="probe" containerID="cri-o://c41546d68783dedc9647f66f8a74dbf9122942d0f78c5b48713646f399bbd1fc" gracePeriod=30 Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.861654 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:47 crc kubenswrapper[4915]: E0127 19:04:47.863305 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0060401c-f9b6-4772-bce5-bda7633de81a" containerName="openstackclient" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.863464 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0060401c-f9b6-4772-bce5-bda7633de81a" containerName="openstackclient" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.863821 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0060401c-f9b6-4772-bce5-bda7633de81a" containerName="openstackclient" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.872654 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.887718 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.889516 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.893966 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.899189 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.902049 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwsx\" (UniqueName: \"kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.902146 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72m74\" (UniqueName: \"kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: 
\"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.902170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.902241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.906018 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.960979 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:04:47 crc kubenswrapper[4915]: I0127 19:04:47.988120 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.004122 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.004246 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwsx\" (UniqueName: \"kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.004350 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72m74\" (UniqueName: \"kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.004382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.005199 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.007077 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " 
pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.097426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwsx\" (UniqueName: \"kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx\") pod \"nova-api-8e80-account-create-update-47h54\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.097493 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2fe5-account-create-update-x9fkk"] Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.109914 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.124646 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data podName:5b5f81dc-48ff-40c8-a0af-84c7c60338fd nodeName:}" failed. No retries permitted until 2026-01-27 19:04:48.624614324 +0000 UTC m=+1379.982467988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data") pod "rabbitmq-server-0" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd") : configmap "rabbitmq-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.110619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72m74\" (UniqueName: \"kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74\") pod \"placement-2fe5-account-create-update-j8xr7\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.138859 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8e80-account-create-update-dbknm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.213196 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.227923 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.240902 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2fe5-account-create-update-x9fkk"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.319439 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8e80-account-create-update-dbknm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.339928 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.340143 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api-log" containerID="cri-o://a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2" gracePeriod=30 Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.340507 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api" containerID="cri-o://8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f" gracePeriod=30 Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.355210 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2bngm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.357084 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.371946 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.384617 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2bngm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.414499 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.416686 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.427021 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p84z\" (UniqueName: \"kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z\") pod \"root-account-create-update-2bngm\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.427075 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts\") pod \"root-account-create-update-2bngm\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.433721 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.469908 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"] Jan 
27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.532022 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p84z\" (UniqueName: \"kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z\") pod \"root-account-create-update-2bngm\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.532115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts\") pod \"root-account-create-update-2bngm\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.532155 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9ms\" (UniqueName: \"kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.532225 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.533326 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts\") pod \"root-account-create-update-2bngm\" 
(UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.534246 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-67s8h"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.555308 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.591527 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-67s8h"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.592322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p84z\" (UniqueName: \"kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z\") pod \"root-account-create-update-2bngm\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.623389 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-rd4bq"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.633865 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9ms\" (UniqueName: \"kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.633935 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " 
pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.634155 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.634202 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data podName:5b5f81dc-48ff-40c8-a0af-84c7c60338fd nodeName:}" failed. No retries permitted until 2026-01-27 19:04:49.634189656 +0000 UTC m=+1380.992043320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data") pod "rabbitmq-server-0" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd") : configmap "rabbitmq-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.637756 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.671880 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-rd4bq"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.675944 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9ms\" (UniqueName: \"kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms\") pod \"nova-cell0-fc87-account-create-update-jl699\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.721843 4915 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell1-5a93-account-create-update-zkn82"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.735274 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5a93-account-create-update-zkn82"] Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.738088 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: E0127 19:04:48.738163 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data podName:b3ead5d8-b1e5-4145-a6de-64c316f4027e nodeName:}" failed. No retries permitted until 2026-01-27 19:04:49.238147266 +0000 UTC m=+1380.596000930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data") pod "rabbitmq-cell1-server-0" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e") : configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.798045 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9380-account-create-update-244pm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.810454 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4ppcd"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.826266 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.826506 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="ovn-northd" containerID="cri-o://9bf35278466b1eb6865efbb8b4d74a29b2c9e804afe1f6f529238009b91cf6d9" gracePeriod=30 Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.826910 
4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="openstack-network-exporter" containerID="cri-o://47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176" gracePeriod=30 Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.834565 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.862838 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9380-account-create-update-244pm"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.881974 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.896623 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4ppcd"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.903562 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-58b8-account-create-update-nm57k"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.938648 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lrzgd"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.944958 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.954849 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-58b8-account-create-update-nm57k"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.976099 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-zdx67"] Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.976326 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ovn-controller-metrics-zdx67" podUID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" containerName="openstack-network-exporter" containerID="cri-o://de86cd114c117f9cffe320ebf31f70906d6b6d5b162d70c11a8ccad94f9510e2" gracePeriod=30 Jan 27 19:04:48 crc kubenswrapper[4915]: I0127 19:04:48.998971 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nphkx"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.020227 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nphkx"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.040716 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wq9n4"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.064042 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-aba9-account-create-update-jpw5r"] Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.085972 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5debd92_961a_492a_8e9d_a51652a3a84a.slice/crio-a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5debd92_961a_492a_8e9d_a51652a3a84a.slice/crio-conmon-a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67897b01_d7d4_465f_9b98_ca325dabb449.slice/crio-conmon-47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.094372 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wq9n4"] Jan 27 19:04:49 
crc kubenswrapper[4915]: I0127 19:04:49.140741 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-aba9-account-create-update-jpw5r"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.235499 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tbtjt"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.257919 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-tbtjt"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.312850 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vs7dx"] Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.323725 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.324150 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data podName:b3ead5d8-b1e5-4145-a6de-64c316f4027e nodeName:}" failed. No retries permitted until 2026-01-27 19:04:50.324126365 +0000 UTC m=+1381.681980039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data") pod "rabbitmq-cell1-server-0" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e") : configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.381648 4915 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-lrzgd" message=< Jan 27 19:04:49 crc kubenswrapper[4915]: Exiting ovn-controller (1) [ OK ] Jan 27 19:04:49 crc kubenswrapper[4915]: > Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.381683 4915 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-lrzgd" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" containerID="cri-o://7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.381717 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-lrzgd" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" containerID="cri-o://7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.410488 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021d6127-8503-45c4-b420-7fd173e1a789" path="/var/lib/kubelet/pods/021d6127-8503-45c4-b420-7fd173e1a789/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.411423 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2368f66b-5bd9-4cf4-8dc9-848d7565f1cd" 
path="/var/lib/kubelet/pods/2368f66b-5bd9-4cf4-8dc9-848d7565f1cd/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.417147 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2852f67f-81d4-4f7c-b5b1-c66939d69aed" path="/var/lib/kubelet/pods/2852f67f-81d4-4f7c-b5b1-c66939d69aed/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.418625 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a336723-5840-48ef-b010-ca1cff69f962" path="/var/lib/kubelet/pods/2a336723-5840-48ef-b010-ca1cff69f962/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.419425 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c9af00-6a16-4e30-b375-95d5bbe7bec6" path="/var/lib/kubelet/pods/44c9af00-6a16-4e30-b375-95d5bbe7bec6/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.419979 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c48f92-8129-4065-910c-166770ecb401" path="/var/lib/kubelet/pods/49c48f92-8129-4065-910c-166770ecb401/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.421035 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b270a3c-b3a7-484a-ba5d-3acd08465527" path="/var/lib/kubelet/pods/4b270a3c-b3a7-484a-ba5d-3acd08465527/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.421571 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5852cc0c-e2c5-4f1f-a979-73fc5607b961" path="/var/lib/kubelet/pods/5852cc0c-e2c5-4f1f-a979-73fc5607b961/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.422136 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fcc76eb-7a19-4509-b8e0-1051dc3e3231" path="/var/lib/kubelet/pods/8fcc76eb-7a19-4509-b8e0-1051dc3e3231/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.422799 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b821973f-c181-4100-b31a-d9f1e9f90ebd" 
path="/var/lib/kubelet/pods/b821973f-c181-4100-b31a-d9f1e9f90ebd/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.424861 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb347f27-66c6-4bc1-8c2b-d75bfcfeea72" path="/var/lib/kubelet/pods/bb347f27-66c6-4bc1-8c2b-d75bfcfeea72/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.425480 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd31932-4bdb-4426-b2bc-77c6cf650a50" path="/var/lib/kubelet/pods/ddd31932-4bdb-4426-b2bc-77c6cf650a50/volumes" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.426027 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vs7dx"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.443642 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67897b01-d7d4-465f-9b98-ca325dabb449/ovn-northd/0.log" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.443691 4915 generic.go:334] "Generic (PLEG): container finished" podID="67897b01-d7d4-465f-9b98-ca325dabb449" containerID="47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176" exitCode=2 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.443709 4915 generic.go:334] "Generic (PLEG): container finished" podID="67897b01-d7d4-465f-9b98-ca325dabb449" containerID="9bf35278466b1eb6865efbb8b4d74a29b2c9e804afe1f6f529238009b91cf6d9" exitCode=143 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.443770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerDied","Data":"47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176"} Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.443811 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerDied","Data":"9bf35278466b1eb6865efbb8b4d74a29b2c9e804afe1f6f529238009b91cf6d9"} Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.468667 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zdx67_84da5d2b-b9c1-41ef-9222-aaf3e67ff232/openstack-network-exporter/0.log" Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.468756 4915 generic.go:334] "Generic (PLEG): container finished" podID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" containerID="de86cd114c117f9cffe320ebf31f70906d6b6d5b162d70c11a8ccad94f9510e2" exitCode=2 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.468890 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zdx67" event={"ID":"84da5d2b-b9c1-41ef-9222-aaf3e67ff232","Type":"ContainerDied","Data":"de86cd114c117f9cffe320ebf31f70906d6b6d5b162d70c11a8ccad94f9510e2"} Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.474364 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7drwm"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.490643 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerID="a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2" exitCode=143 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.490704 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerDied","Data":"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2"} Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.502987 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7drwm"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.525895 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"] Jan 27 19:04:49 
crc kubenswrapper[4915]: I0127 19:04:49.526514 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="dnsmasq-dns" containerID="cri-o://db6a738db0a4ebeeaf1061a5d01ac97d32cdc31fdce6dfd88815e40db583ca2f" gracePeriod=10 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.558447 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nk9l7"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.584847 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nk9l7"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.601034 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.601361 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-cff5fcc84-lxsfm" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-log" containerID="cri-o://2be69f7a262552a158049c3b0bbfbb5ea2b1a992e0c212434065b95a436e316e" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.601508 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-cff5fcc84-lxsfm" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-api" containerID="cri-o://ad1a62b5c13ac79f78f79de53e79d8b6e2d71d37af6a621d62c0f82ef67828b5" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.644201 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 19:04:49 crc kubenswrapper[4915]: E0127 19:04:49.644261 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data podName:5b5f81dc-48ff-40c8-a0af-84c7c60338fd nodeName:}" failed. 
No retries permitted until 2026-01-27 19:04:51.644246942 +0000 UTC m=+1383.002100606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data") pod "rabbitmq-server-0" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd") : configmap "rabbitmq-config-data" not found Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.655516 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.656081 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="openstack-network-exporter" containerID="cri-o://c06502ec99303d1016158ff95946da489a6d421aee3a2b886bff7e1bb8d8498f" gracePeriod=300 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.687000 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.687823 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="openstack-network-exporter" containerID="cri-o://f92d8556511e3b49e269eccc8efb0fb37188fb77e03e17b4ccb7a23b18b8ec13" gracePeriod=300 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.712385 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gw7dx"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.756915 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gw7dx"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.790713 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.803690 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-storage-0"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.806672 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-server" containerID="cri-o://e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807166 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="swift-recon-cron" containerID="cri-o://46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807221 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="rsync" containerID="cri-o://fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807272 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-expirer" containerID="cri-o://ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807318 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-updater" containerID="cri-o://b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807362 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" 
containerName="object-auditor" containerID="cri-o://106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807406 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-replicator" containerID="cri-o://1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807449 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-server" containerID="cri-o://78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807491 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-updater" containerID="cri-o://f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807534 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-auditor" containerID="cri-o://656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807599 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-replicator" containerID="cri-o://c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807642 4915 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-server" containerID="cri-o://beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807689 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-reaper" containerID="cri-o://b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807730 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-auditor" containerID="cri-o://3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.807805 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-replicator" containerID="cri-o://72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.838140 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.839752 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-log" containerID="cri-o://361b3a3c5ffb20740339364094b1dc7890ed96dc005e72cd38dfd352f50114ed" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.839998 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" 
containerName="nova-api-api" containerID="cri-o://a933786d7cc9361f17ff67cb45c92e9e82dd73239e589fc136036527cb4a3031" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.869318 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.869558 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc59d8b57-zj69c" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-api" containerID="cri-o://62678ab433696d7351f81f9c1770c3c809c3a53ec6feeae8a28042bf8f437350" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.869923 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc59d8b57-zj69c" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-httpd" containerID="cri-o://5b5fd7950d039f8f25f1800441b524fb35816018c64293f78005b6a7b70b7696" gracePeriod=30 Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.882992 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x46j5"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.899143 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x46j5"] Jan 27 19:04:49 crc kubenswrapper[4915]: I0127 19:04:49.912150 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.013721 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="rabbitmq" containerID="cri-o://2b4f416be9fb86b0cb75f45fd91a7a0c66676aedc9385d72b5cec37350e25d70" gracePeriod=604800 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.023785 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.091944 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="ovsdbserver-nb" containerID="cri-o://0ab2050c29aa330700365f1e8b79ab3720b9477e1281313c21a50e2387bf14ae" gracePeriod=300 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.123965 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dwgr4"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.127135 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="ovsdbserver-sb" containerID="cri-o://b4e015d7524e506289b969ff9782e3d4f4f302efdbc8387341c6e0b4bb26a7fc" gracePeriod=300 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.132065 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" containerID="cri-o://d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" gracePeriod=29 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.147684 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dwgr4"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.158530 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.158819 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" containerID="cri-o://d835385c0bed6004e4974dcc64c9e29fb80d508d9fd2f147d3785f1645ad55ed" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.159037 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" containerID="cri-o://3071a4d932d0954ba665a0964a52de9a30feb98cb297eae5bdd4c4d385c63546" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.176055 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.176303 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-log" containerID="cri-o://06641066358db82da7a50ef5141d5040acea1b4c9a664c360a4dc77e290a9f7d" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.176689 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-httpd" containerID="cri-o://dd40b8fd0daebe36f826489ae0a0b8562d952c77d4c20c94744566de6a48b7a9" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.233395 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-blb6q"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.250331 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-blb6q"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.289956 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6ch54"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.311167 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6ch54"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.332545 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:04:50 
crc kubenswrapper[4915]: I0127 19:04:50.332963 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-log" containerID="cri-o://cd55b928203a100adbd69bb6a3576cba209345a89c3fe528d411607287b3964d" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.333093 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-httpd" containerID="cri-o://f76d58b03c67cca894c7c03c90ad9ed4cc48b08790c07b0c5345806e5f49dc1d" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.349597 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.350002 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5fb74bb44c-7sqss" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-httpd" containerID="cri-o://ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.350207 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5fb74bb44c-7sqss" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-server" containerID="cri-o://f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.380513 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.380578 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data 
podName:b3ead5d8-b1e5-4145-a6de-64c316f4027e nodeName:}" failed. No retries permitted until 2026-01-27 19:04:52.380563267 +0000 UTC m=+1383.738416931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data") pod "rabbitmq-cell1-server-0" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e") : configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.387898 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="galera" containerID="cri-o://4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.394380 4915 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 27 19:04:50 crc kubenswrapper[4915]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 19:04:50 crc kubenswrapper[4915]: + source /usr/local/bin/container-scripts/functions Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNBridge=br-int Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNRemote=tcp:localhost:6642 Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNEncapType=geneve Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNAvailabilityZones= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ EnableChassisAsGateway=true Jan 27 19:04:50 crc kubenswrapper[4915]: ++ PhysicalNetworks= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNHostName= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 19:04:50 crc kubenswrapper[4915]: ++ ovs_dir=/var/lib/openvswitch Jan 27 19:04:50 crc kubenswrapper[4915]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 19:04:50 crc kubenswrapper[4915]: ++ 
FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 19:04:50 crc kubenswrapper[4915]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + cleanup_ovsdb_server_semaphore Jan 27 19:04:50 crc kubenswrapper[4915]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 19:04:50 crc kubenswrapper[4915]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-g8pg6" message=< Jan 27 19:04:50 crc kubenswrapper[4915]: Exiting ovsdb-server (5) [ OK ] Jan 27 19:04:50 crc kubenswrapper[4915]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 19:04:50 crc kubenswrapper[4915]: + source /usr/local/bin/container-scripts/functions Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNBridge=br-int Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNRemote=tcp:localhost:6642 Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNEncapType=geneve Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNAvailabilityZones= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ EnableChassisAsGateway=true Jan 27 19:04:50 crc kubenswrapper[4915]: ++ PhysicalNetworks= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNHostName= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 19:04:50 crc kubenswrapper[4915]: ++ ovs_dir=/var/lib/openvswitch Jan 27 19:04:50 crc kubenswrapper[4915]: 
++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 19:04:50 crc kubenswrapper[4915]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 19:04:50 crc kubenswrapper[4915]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + cleanup_ovsdb_server_semaphore Jan 27 19:04:50 crc kubenswrapper[4915]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 19:04:50 crc kubenswrapper[4915]: > Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.394462 4915 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 27 19:04:50 crc kubenswrapper[4915]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 19:04:50 crc kubenswrapper[4915]: + source /usr/local/bin/container-scripts/functions Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNBridge=br-int Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNRemote=tcp:localhost:6642 Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNEncapType=geneve Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNAvailabilityZones= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ EnableChassisAsGateway=true Jan 27 19:04:50 crc kubenswrapper[4915]: ++ PhysicalNetworks= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ OVNHostName= Jan 27 19:04:50 crc kubenswrapper[4915]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 19:04:50 crc 
kubenswrapper[4915]: ++ ovs_dir=/var/lib/openvswitch Jan 27 19:04:50 crc kubenswrapper[4915]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 19:04:50 crc kubenswrapper[4915]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 19:04:50 crc kubenswrapper[4915]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + sleep 0.5 Jan 27 19:04:50 crc kubenswrapper[4915]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 19:04:50 crc kubenswrapper[4915]: + cleanup_ovsdb_server_semaphore Jan 27 19:04:50 crc kubenswrapper[4915]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 19:04:50 crc kubenswrapper[4915]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 19:04:50 crc kubenswrapper[4915]: > pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" containerID="cri-o://416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.394767 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" containerID="cri-o://416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" gracePeriod=29 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.400672 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.422849 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-create-fpwgv"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.450944 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fpwgv"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.469587 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.469864 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7644f9784b-dbhxl" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api-log" containerID="cri-o://4e8f634b8ef683d623319b77feb8b9f9b68eb10a115f361187710eedb123c004" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.470441 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7644f9784b-dbhxl" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api" containerID="cri-o://e45fa431d5b8ceb54b435ca3de9cafe6ec697d885699991a82ee10f7135d868b" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.493403 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-979tj"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.519467 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-979tj"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.538081 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.542853 4915 generic.go:334] "Generic (PLEG): container finished" podID="102e986e-f101-4f49-af96-50368468f7b4" containerID="2be69f7a262552a158049c3b0bbfbb5ea2b1a992e0c212434065b95a436e316e" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.543020 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerDied","Data":"2be69f7a262552a158049c3b0bbfbb5ea2b1a992e0c212434065b95a436e316e"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.546516 4915 generic.go:334] "Generic (PLEG): container finished" podID="4ef0121e-da20-4815-bdba-f03c90dea333" containerID="db6a738db0a4ebeeaf1061a5d01ac97d32cdc31fdce6dfd88815e40db583ca2f" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.546574 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" event={"ID":"4ef0121e-da20-4815-bdba-f03c90dea333","Type":"ContainerDied","Data":"db6a738db0a4ebeeaf1061a5d01ac97d32cdc31fdce6dfd88815e40db583ca2f"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.549325 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w4pf8"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.564358 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6c2e3568-b735-4d0a-a6ff-a4862f244a53/ovsdbserver-nb/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.564426 4915 generic.go:334] "Generic (PLEG): container finished" podID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerID="c06502ec99303d1016158ff95946da489a6d421aee3a2b886bff7e1bb8d8498f" exitCode=2 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.564446 4915 generic.go:334] "Generic (PLEG): container finished" podID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerID="0ab2050c29aa330700365f1e8b79ab3720b9477e1281313c21a50e2387bf14ae" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.564739 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerDied","Data":"c06502ec99303d1016158ff95946da489a6d421aee3a2b886bff7e1bb8d8498f"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 
19:04:50.564781 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerDied","Data":"0ab2050c29aa330700365f1e8b79ab3720b9477e1281313c21a50e2387bf14ae"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.566132 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w4pf8"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.606907 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.607137 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bead4142-2b0e-41b1-85dd-cfe102596e93" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.626322 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.626613 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f9776577f-2jndx" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker-log" containerID="cri-o://c4b9293d64815d72131fea7b74e18820d6f50efa91a0dc1e08436c11314f0de5" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.626964 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.627027 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-f9776577f-2jndx" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker" containerID="cri-o://23aa745b7dd112c1f66341caed263053b7f5ea24b9e8c71afd2243cebd1cab75" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.627021 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.649382 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67897b01-d7d4-465f-9b98-ca325dabb449/ovn-northd/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.649461 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kxzjh"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.649483 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"67897b01-d7d4-465f-9b98-ca325dabb449","Type":"ContainerDied","Data":"00431e5452a68d033b8efb59d411dffca92590a52b07644b362fb8b40d74d9de"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.649506 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00431e5452a68d033b8efb59d411dffca92590a52b07644b362fb8b40d74d9de" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.664824 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3741-account-create-update-t6kdm"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.688582 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zdx67_84da5d2b-b9c1-41ef-9222-aaf3e67ff232/openstack-network-exporter/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.688686 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zdx67" event={"ID":"84da5d2b-b9c1-41ef-9222-aaf3e67ff232","Type":"ContainerDied","Data":"5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.688724 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c6723dc84252855ee6682a65b68c069cfae17605985570ef674a52a00cced9d" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.692030 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerID="5b5fd7950d039f8f25f1800441b524fb35816018c64293f78005b6a7b70b7696" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.692091 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerDied","Data":"5b5fd7950d039f8f25f1800441b524fb35816018c64293f78005b6a7b70b7696"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.698500 4915 generic.go:334] "Generic (PLEG): container finished" podID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerID="cd55b928203a100adbd69bb6a3576cba209345a89c3fe528d411607287b3964d" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.699010 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerDied","Data":"cd55b928203a100adbd69bb6a3576cba209345a89c3fe528d411607287b3964d"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.704464 4915 generic.go:334] "Generic (PLEG): container finished" podID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.704558 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" 
event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerDied","Data":"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.718012 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kxzjh"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.731270 4915 generic.go:334] "Generic (PLEG): container finished" podID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerID="06641066358db82da7a50ef5141d5040acea1b4c9a664c360a4dc77e290a9f7d" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.731325 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerDied","Data":"06641066358db82da7a50ef5141d5040acea1b4c9a664c360a4dc77e290a9f7d"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.733822 4915 generic.go:334] "Generic (PLEG): container finished" podID="0060401c-f9b6-4772-bce5-bda7633de81a" containerID="e18f372d20db23571278d2902be8558c9c9c1236c8adb8a38d079d6827836640" exitCode=137 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.742454 4915 generic.go:334] "Generic (PLEG): container finished" podID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerID="7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.742501 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd" event={"ID":"aae7274f-1da9-4023-96b1-30cca477c6a2","Type":"ContainerDied","Data":"7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.742525 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lrzgd" event={"ID":"aae7274f-1da9-4023-96b1-30cca477c6a2","Type":"ContainerDied","Data":"dd632c343111af79be5b2bd34fcfdb9299d5d6bec7bbe59196c550d950a3cf41"} 
Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.742535 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd632c343111af79be5b2bd34fcfdb9299d5d6bec7bbe59196c550d950a3cf41" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.750530 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3741-account-create-update-t6kdm"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.763253 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.763519 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener-log" containerID="cri-o://b733ca9ab35e4ebe64a9190112a4fec324eed485f565cf65e04c47fd1024429e" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.763638 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener" containerID="cri-o://6cce6eea7541be78647a81657904cffa63e1bcb52b367db413e89aa732f8add2" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.766930 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_754faa2e-19b3-47fb-9436-62d0ebd49ea4/ovsdbserver-sb/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.766969 4915 generic.go:334] "Generic (PLEG): container finished" podID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerID="f92d8556511e3b49e269eccc8efb0fb37188fb77e03e17b4ccb7a23b18b8ec13" exitCode=2 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.766983 4915 generic.go:334] "Generic (PLEG): container finished" podID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" 
containerID="b4e015d7524e506289b969ff9782e3d4f4f302efdbc8387341c6e0b4bb26a7fc" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.767020 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerDied","Data":"f92d8556511e3b49e269eccc8efb0fb37188fb77e03e17b4ccb7a23b18b8ec13"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.767044 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerDied","Data":"b4e015d7524e506289b969ff9782e3d4f4f302efdbc8387341c6e0b4bb26a7fc"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.772239 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.785024 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2bngm"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.791719 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.793016 4915 generic.go:334] "Generic (PLEG): container finished" podID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerID="c41546d68783dedc9647f66f8a74dbf9122942d0f78c5b48713646f399bbd1fc" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.793041 4915 generic.go:334] "Generic (PLEG): container finished" podID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerID="cae1cda15bc05c31ba74684aadd103b351b56954bbbc994031b930a06b055b44" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.793087 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerDied","Data":"c41546d68783dedc9647f66f8a74dbf9122942d0f78c5b48713646f399bbd1fc"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.793105 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerDied","Data":"cae1cda15bc05c31ba74684aadd103b351b56954bbbc994031b930a06b055b44"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.803664 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.804157 4915 generic.go:334] "Generic (PLEG): container finished" podID="cc16c397-45e2-4878-8927-752a1832ec0a" containerID="d835385c0bed6004e4974dcc64c9e29fb80d508d9fd2f147d3785f1645ad55ed" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.804207 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerDied","Data":"d835385c0bed6004e4974dcc64c9e29fb80d508d9fd2f147d3785f1645ad55ed"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.814188 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.814396 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerName="nova-scheduler-scheduler" containerID="cri-o://34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842637 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 
19:04:50.842658 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842665 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842672 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842677 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842684 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842692 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842700 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842714 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691" 
exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842725 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842732 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842739 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca" exitCode=0 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842824 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842854 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842876 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"} Jan 27 19:04:50 crc 
kubenswrapper[4915]: I0127 19:04:50.842899 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842910 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842920 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842931 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842961 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.842971 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.871344 4915 generic.go:334] "Generic (PLEG): container finished" podID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerID="361b3a3c5ffb20740339364094b1dc7890ed96dc005e72cd38dfd352f50114ed" exitCode=143 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.871381 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerDied","Data":"361b3a3c5ffb20740339364094b1dc7890ed96dc005e72cd38dfd352f50114ed"} Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.905354 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.905634 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.923508 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w9vcg"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.925282 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_67897b01-d7d4-465f-9b98-ca325dabb449/ovn-northd/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.925353 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.930968 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="rabbitmq" containerID="cri-o://87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce" gracePeriod=604800 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.931665 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zdx67_84da5d2b-b9c1-41ef-9222-aaf3e67ff232/openstack-network-exporter/0.log" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.931732 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zdx67" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.938269 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.940101 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.946071 4915 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 19:04:50 crc kubenswrapper[4915]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 19:04:50 crc 
kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: if [ -n "placement" ]; then Jan 27 19:04:50 crc kubenswrapper[4915]: GRANT_DATABASE="placement" Jan 27 19:04:50 crc kubenswrapper[4915]: else Jan 27 19:04:50 crc kubenswrapper[4915]: GRANT_DATABASE="*" Jan 27 19:04:50 crc kubenswrapper[4915]: fi Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: # going for maximum compatibility here: Jan 27 19:04:50 crc kubenswrapper[4915]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 19:04:50 crc kubenswrapper[4915]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 19:04:50 crc kubenswrapper[4915]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 19:04:50 crc kubenswrapper[4915]: # support updates Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: $MYSQL_CMD < logger="UnhandledError" Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.946140 4915 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 19:04:50 crc kubenswrapper[4915]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: if [ -n "nova_api" ]; then Jan 27 19:04:50 crc kubenswrapper[4915]: GRANT_DATABASE="nova_api" Jan 27 19:04:50 crc kubenswrapper[4915]: else Jan 27 19:04:50 crc kubenswrapper[4915]: GRANT_DATABASE="*" Jan 27 
19:04:50 crc kubenswrapper[4915]: fi Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: # going for maximum compatibility here: Jan 27 19:04:50 crc kubenswrapper[4915]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 19:04:50 crc kubenswrapper[4915]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 19:04:50 crc kubenswrapper[4915]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 19:04:50 crc kubenswrapper[4915]: # support updates Jan 27 19:04:50 crc kubenswrapper[4915]: Jan 27 19:04:50 crc kubenswrapper[4915]: $MYSQL_CMD < logger="UnhandledError" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.948945 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w9vcg"] Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.949000 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-8e80-account-create-update-47h54" podUID="0dfca067-a625-475b-9443-cde8a54f12af" Jan 27 19:04:50 crc kubenswrapper[4915]: E0127 19:04:50.949032 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-2fe5-account-create-update-j8xr7" podUID="06ad99e6-1205-4b12-8516-b5f0f595f0af" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.969070 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lrzgd" Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.979303 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.979560 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" gracePeriod=30 Jan 27 19:04:50 crc kubenswrapper[4915]: I0127 19:04:50.983069 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:50.998926 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fwtcc"] Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.010848 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fwtcc"] Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.030249 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.064315 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.065442 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.066426 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.066456 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107416 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4fp\" (UniqueName: \"kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") " Jan 27 
19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107463 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107521 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107560 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107587 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107619 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107659 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107719 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107782 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107822 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tq4r\" (UniqueName: \"kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107848 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107896 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jl87\" (UniqueName: \"kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107919 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc\") pod \"4ef0121e-da20-4815-bdba-f03c90dea333\" (UID: \"4ef0121e-da20-4815-bdba-f03c90dea333\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr2qc\" (UniqueName: \"kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107967 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.107992 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108014 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108035 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108088 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108120 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs\") pod \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\" (UID: \"84da5d2b-b9c1-41ef-9222-aaf3e67ff232\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108196 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108223 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108244 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108267 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn\") pod \"aae7274f-1da9-4023-96b1-30cca477c6a2\" (UID: \"aae7274f-1da9-4023-96b1-30cca477c6a2\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108630 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108670 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108699 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108818 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108830 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.108838 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.109188 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config" (OuterVolumeSpecName: "config") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.109233 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.109701 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.113715 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts" (OuterVolumeSpecName: "scripts") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.118132 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run" (OuterVolumeSpecName: "var-run") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.119410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts" (OuterVolumeSpecName: "scripts") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.120228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.123918 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp" (OuterVolumeSpecName: "kube-api-access-qb4fp") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "kube-api-access-qb4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.129391 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config" (OuterVolumeSpecName: "config") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.132089 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87" (OuterVolumeSpecName: "kube-api-access-2jl87") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "kube-api-access-2jl87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.168856 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r" (OuterVolumeSpecName: "kube-api-access-2tq4r") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "kube-api-access-2tq4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.177749 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc" (OuterVolumeSpecName: "kube-api-access-mr2qc") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "kube-api-access-mr2qc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212008 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="galera" probeResult="failure" output=""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212802 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret\") pod \"0060401c-f9b6-4772-bce5-bda7633de81a\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212857 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212906 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle\") pod \"0060401c-f9b6-4772-bce5-bda7633de81a\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.212992 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.213019 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg44l\" (UniqueName: \"kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.213142 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ssx\" (UniqueName: \"kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx\") pod \"0060401c-f9b6-4772-bce5-bda7633de81a\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.213605 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config\") pod \"0060401c-f9b6-4772-bce5-bda7633de81a\" (UID: \"0060401c-f9b6-4772-bce5-bda7633de81a\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.213667 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.213713 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle\") pod \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\" (UID: \"fa88bf20-ee44-4049-b46d-75f3a64d3a4d\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214426 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214448 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tq4r\" (UniqueName: \"kubernetes.io/projected/4ef0121e-da20-4815-bdba-f03c90dea333-kube-api-access-2tq4r\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214461 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jl87\" (UniqueName: \"kubernetes.io/projected/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-kube-api-access-2jl87\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214473 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr2qc\" (UniqueName: \"kubernetes.io/projected/67897b01-d7d4-465f-9b98-ca325dabb449-kube-api-access-mr2qc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214485 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214495 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-run\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214506 4915 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-ovs-rundir\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214517 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aae7274f-1da9-4023-96b1-30cca477c6a2-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214527 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67897b01-d7d4-465f-9b98-ca325dabb449-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214537 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4fp\" (UniqueName: \"kubernetes.io/projected/aae7274f-1da9-4023-96b1-30cca477c6a2-kube-api-access-qb4fp\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.214547 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aae7274f-1da9-4023-96b1-30cca477c6a2-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.221666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.229886 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts" (OuterVolumeSpecName: "scripts") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.235979 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.263131 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx" (OuterVolumeSpecName: "kube-api-access-z4ssx") pod "0060401c-f9b6-4772-bce5-bda7633de81a" (UID: "0060401c-f9b6-4772-bce5-bda7633de81a"). InnerVolumeSpecName "kube-api-access-z4ssx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.263324 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l" (OuterVolumeSpecName: "kube-api-access-rg44l") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "kube-api-access-rg44l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.316488 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.316733 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg44l\" (UniqueName: \"kubernetes.io/projected/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-kube-api-access-rg44l\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.316743 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ssx\" (UniqueName: \"kubernetes.io/projected/0060401c-f9b6-4772-bce5-bda7633de81a-kube-api-access-z4ssx\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.316752 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.316761 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.366387 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.391186 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08034122-eb86-44a0-aba7-6d6d07afee31" path="/var/lib/kubelet/pods/08034122-eb86-44a0-aba7-6d6d07afee31/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.392354 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="375893e2-1bac-422d-9768-3b462dccb8a5" path="/var/lib/kubelet/pods/375893e2-1bac-422d-9768-3b462dccb8a5/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.392884 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725ac4ea-6fad-4426-bab4-3e4a5ff3251c" path="/var/lib/kubelet/pods/725ac4ea-6fad-4426-bab4-3e4a5ff3251c/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.395938 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d11dfb-4c76-4968-81c0-d64a2272772b" path="/var/lib/kubelet/pods/75d11dfb-4c76-4968-81c0-d64a2272772b/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.396643 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ffe11b-985f-4acd-a2ea-12983908d961" path="/var/lib/kubelet/pods/77ffe11b-985f-4acd-a2ea-12983908d961/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.397164 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7a7595-f421-4ec0-afa2-427233ac9191" path="/var/lib/kubelet/pods/8a7a7595-f421-4ec0-afa2-427233ac9191/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.418577 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.424871 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.448929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0060401c-f9b6-4772-bce5-bda7633de81a" (UID: "0060401c-f9b6-4772-bce5-bda7633de81a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.451391 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c061dc9-adce-4ec1-9d89-d55751e9f851" path="/var/lib/kubelet/pods/8c061dc9-adce-4ec1-9d89-d55751e9f851/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.458664 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fb062d-f400-4aac-9c83-409336ff071d" path="/var/lib/kubelet/pods/90fb062d-f400-4aac-9c83-409336ff071d/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.466952 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99e935b3-c64c-4c02-821b-18301c6b6c27" path="/var/lib/kubelet/pods/99e935b3-c64c-4c02-821b-18301c6b6c27/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.467995 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6908876-dd99-42e8-b22f-de6349dca8e5" path="/var/lib/kubelet/pods/a6908876-dd99-42e8-b22f-de6349dca8e5/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.468904 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af75964e-e79e-49da-ad8d-37b14e63b5e6" path="/var/lib/kubelet/pods/af75964e-e79e-49da-ad8d-37b14e63b5e6/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.495962 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04e9061-ebcc-481f-8875-624ad3914bb0" path="/var/lib/kubelet/pods/c04e9061-ebcc-481f-8875-624ad3914bb0/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.497090 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d786e434-5fa6-4259-a6bf-594f61b7364a" path="/var/lib/kubelet/pods/d786e434-5fa6-4259-a6bf-594f61b7364a/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.498752 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9960a75-b93c-4b1b-b6f8-6f2168b597df" path="/var/lib/kubelet/pods/e9960a75-b93c-4b1b-b6f8-6f2168b597df/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.499690 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bfa088-3a3e-4393-ae45-1d2d26a39dc7" path="/var/lib/kubelet/pods/f1bfa088-3a3e-4393-ae45-1d2d26a39dc7/volumes"
Jan 27 19:04:51 crc kubenswrapper[4915]: W0127 19:04:51.518520 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a9425f_4628_4cad_a9d0_b455a3b6f2b1.slice/crio-60214b5f964ef3ca2c9c45c1257962fc4645ef6fbf2cdc38ad5dffffe83aea39 WatchSource:0}: Error finding container 60214b5f964ef3ca2c9c45c1257962fc4645ef6fbf2cdc38ad5dffffe83aea39: Status 404 returned error can't find the container with id 60214b5f964ef3ca2c9c45c1257962fc4645ef6fbf2cdc38ad5dffffe83aea39
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.519481 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.520328 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") pod \"67897b01-d7d4-465f-9b98-ca325dabb449\" (UID: \"67897b01-d7d4-465f-9b98-ca325dabb449\") "
Jan 27 19:04:51 crc kubenswrapper[4915]: W0127 19:04:51.520664 4915 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/67897b01-d7d4-465f-9b98-ca325dabb449/volumes/kubernetes.io~secret/combined-ca-bundle
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.520762 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.521364 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.521387 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.521421 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.543604 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.546714 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.548283 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0060401c-f9b6-4772-bce5-bda7633de81a" (UID: "0060401c-f9b6-4772-bce5-bda7633de81a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.551468 4915 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 27 19:04:51 crc kubenswrapper[4915]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: if [ -n "nova_cell0" ]; then
Jan 27 19:04:51 crc kubenswrapper[4915]: GRANT_DATABASE="nova_cell0"
Jan 27 19:04:51 crc kubenswrapper[4915]: else
Jan 27 19:04:51 crc kubenswrapper[4915]: GRANT_DATABASE="*"
Jan 27 19:04:51 crc kubenswrapper[4915]: fi
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: # going for maximum compatibility here:
Jan 27 19:04:51 crc kubenswrapper[4915]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 27 19:04:51 crc kubenswrapper[4915]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 27 19:04:51 crc kubenswrapper[4915]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 27 19:04:51 crc kubenswrapper[4915]: # support updates
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: $MYSQL_CMD < logger="UnhandledError"
Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.553417 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-fc87-account-create-update-jl699" podUID="67a9425f-4628-4cad-a9d0-b455a3b6f2b1"
Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.555799 4915 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 27 19:04:51 crc kubenswrapper[4915]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: if [ -n "" ]; then
Jan 27 19:04:51 crc kubenswrapper[4915]: GRANT_DATABASE=""
Jan 27 19:04:51 crc kubenswrapper[4915]: else
Jan 27 19:04:51 crc kubenswrapper[4915]: GRANT_DATABASE="*"
Jan 27 19:04:51 crc kubenswrapper[4915]: fi
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: # going for maximum compatibility here:
Jan 27 19:04:51 crc kubenswrapper[4915]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 27 19:04:51 crc kubenswrapper[4915]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 27 19:04:51 crc kubenswrapper[4915]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 27 19:04:51 crc kubenswrapper[4915]: # support updates
Jan 27 19:04:51 crc kubenswrapper[4915]:
Jan 27 19:04:51 crc kubenswrapper[4915]: $MYSQL_CMD < logger="UnhandledError"
Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.557258 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-2bngm" podUID="7ff3f576-3c7a-434d-b4be-51ff90cbd199"
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.575470 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.587873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.593210 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.594536 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.599952 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.615820 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config" (OuterVolumeSpecName: "config") pod "4ef0121e-da20-4815-bdba-f03c90dea333" (UID: "4ef0121e-da20-4815-bdba-f03c90dea333"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623852 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623880 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623935 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623945 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623954 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623961 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.623970 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ef0121e-da20-4815-bdba-f03c90dea333-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.627241 4915 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.627687 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.628171 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.628293 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.628334 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.634759 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.636495 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.636556 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.657740 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "aae7274f-1da9-4023-96b1-30cca477c6a2" (UID: "aae7274f-1da9-4023-96b1-30cca477c6a2"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.663019 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0060401c-f9b6-4772-bce5-bda7633de81a" (UID: "0060401c-f9b6-4772-bce5-bda7633de81a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.664893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "84da5d2b-b9c1-41ef-9222-aaf3e67ff232" (UID: "84da5d2b-b9c1-41ef-9222-aaf3e67ff232"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.668975 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.686898 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "67897b01-d7d4-465f-9b98-ca325dabb449" (UID: "67897b01-d7d4-465f-9b98-ca325dabb449"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.728055 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data" (OuterVolumeSpecName: "config-data") pod "fa88bf20-ee44-4049-b46d-75f3a64d3a4d" (UID: "fa88bf20-ee44-4049-b46d-75f3a64d3a4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729001 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae7274f-1da9-4023-96b1-30cca477c6a2-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729023 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729034 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0060401c-f9b6-4772-bce5-bda7633de81a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729043 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84da5d2b-b9c1-41ef-9222-aaf3e67ff232-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729051 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa88bf20-ee44-4049-b46d-75f3a64d3a4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.729061 4915 reconciler_common.go:293] "Volume detached for volume 
\"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/67897b01-d7d4-465f-9b98-ca325dabb449-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.729662 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 19:04:51 crc kubenswrapper[4915]: E0127 19:04:51.729725 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data podName:5b5f81dc-48ff-40c8-a0af-84c7c60338fd nodeName:}" failed. No retries permitted until 2026-01-27 19:04:55.729698185 +0000 UTC m=+1387.087551849 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data") pod "rabbitmq-server-0" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd") : configmap "rabbitmq-config-data" not found Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.817047 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2bngm"] Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.817083 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"] Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.852903 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6c2e3568-b735-4d0a-a6ff-a4862f244a53/ovsdbserver-nb/0.log" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.853277 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.862515 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.182:8776/healthcheck\": read tcp 10.217.0.2:34802->10.217.0.182:8776: read: connection reset by peer" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.862605 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_754faa2e-19b3-47fb-9436-62d0ebd49ea4/ovsdbserver-sb/0.log" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.862688 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.876902 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb74bb44c-7sqss" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.884836 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.892768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fc87-account-create-update-jl699" event={"ID":"67a9425f-4628-4cad-a9d0-b455a3b6f2b1","Type":"ContainerStarted","Data":"60214b5f964ef3ca2c9c45c1257962fc4645ef6fbf2cdc38ad5dffffe83aea39"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.898908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2bngm" event={"ID":"7ff3f576-3c7a-434d-b4be-51ff90cbd199","Type":"ContainerStarted","Data":"a5a5e9f36c5e063b4d2345770629c6d013836cd7d2e150d73ce5aec5a40ee06b"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.901425 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.902862 4915 scope.go:117] "RemoveContainer" containerID="e18f372d20db23571278d2902be8558c9c9c1236c8adb8a38d079d6827836640" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.902968 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.920535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e80-account-create-update-47h54" event={"ID":"0dfca067-a625-475b-9443-cde8a54f12af","Type":"ContainerStarted","Data":"6c27699e4af34d99e0cb26ce3f19167ee145015210d51302902bc63350d1d7d8"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936193 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936264 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936289 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936350 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936433 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936500 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936528 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.936553 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphc9\" (UniqueName: \"kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9\") pod \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\" (UID: \"6c2e3568-b735-4d0a-a6ff-a4862f244a53\") " Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.939878 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config" (OuterVolumeSpecName: "config") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.940224 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.940308 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts" (OuterVolumeSpecName: "scripts") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.947756 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_754faa2e-19b3-47fb-9436-62d0ebd49ea4/ovsdbserver-sb/0.log" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.947843 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"754faa2e-19b3-47fb-9436-62d0ebd49ea4","Type":"ContainerDied","Data":"065c4061afa5e35d12717c28d2f122b0d975243697a3da3af487cc5b3e57e0ed"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.947913 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.974185 4915 generic.go:334] "Generic (PLEG): container finished" podID="1b2c635a-a36f-415f-9746-97620456c8c8" containerID="f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7" exitCode=0 Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.975086 4915 generic.go:334] "Generic (PLEG): container finished" podID="1b2c635a-a36f-415f-9746-97620456c8c8" containerID="ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" exitCode=0 Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.974386 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5fb74bb44c-7sqss" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.974405 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9" (OuterVolumeSpecName: "kube-api-access-mphc9") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "kube-api-access-mphc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.974405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerDied","Data":"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.975815 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerDied","Data":"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.975877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5fb74bb44c-7sqss" event={"ID":"1b2c635a-a36f-415f-9746-97620456c8c8","Type":"ContainerDied","Data":"b0f89e18a1656926b9da6f886ebd76bd323d27b6aecdce1513a27fa2441d920b"} Jan 27 19:04:51 crc kubenswrapper[4915]: I0127 19:04:51.977962 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.017536 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.017626 4915 scope.go:117] "RemoveContainer" containerID="f92d8556511e3b49e269eccc8efb0fb37188fb77e03e17b4ccb7a23b18b8ec13" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.018369 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.017917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-tjs2m" event={"ID":"4ef0121e-da20-4815-bdba-f03c90dea333","Type":"ContainerDied","Data":"477108d88d1dfcde20dfb4ecbfee5cde6d4599361a4aee275eedf5d272f32f6e"} Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.038988 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039089 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039144 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039176 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp4q7\" (UniqueName: 
\"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039263 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039320 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039349 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs\") pod \"bead4142-2b0e-41b1-85dd-cfe102596e93\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.039396 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040620 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle\") pod \"bead4142-2b0e-41b1-85dd-cfe102596e93\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040686 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040706 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040758 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040775 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040847 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.040871 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28sl6\" (UniqueName: \"kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041197 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfzl\" (UniqueName: \"kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl\") pod \"bead4142-2b0e-41b1-85dd-cfe102596e93\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041219 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041261 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data\") pod \"bead4142-2b0e-41b1-85dd-cfe102596e93\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041284 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041301 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041588 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041766 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041834 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041854 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data\") pod \"1b2c635a-a36f-415f-9746-97620456c8c8\" (UID: \"1b2c635a-a36f-415f-9746-97620456c8c8\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041932 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7dvs\" (UniqueName: \"kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs\") pod \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\" (UID: \"754faa2e-19b3-47fb-9436-62d0ebd49ea4\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041949 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated\") pod \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\" (UID: \"c245e6e6-955f-4f75-9427-3a3bd0f26c97\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.041987 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs\") pod \"bead4142-2b0e-41b1-85dd-cfe102596e93\" (UID: \"bead4142-2b0e-41b1-85dd-cfe102596e93\") "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.042161 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.043554 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.044230 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.046374 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.047507 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.047557 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.051341 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.052558 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.053557 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs" (OuterVolumeSpecName: "kube-api-access-c7dvs") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "kube-api-access-c7dvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.053871 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055687 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055721 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055742 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055754 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c2e3568-b735-4d0a-a6ff-a4862f244a53-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055808 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055823 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055837 4915 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055848 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.055860 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphc9\" (UniqueName: \"kubernetes.io/projected/6c2e3568-b735-4d0a-a6ff-a4862f244a53-kube-api-access-mphc9\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.061634 4915 generic.go:334] "Generic (PLEG): container finished" podID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerID="4e8f634b8ef683d623319b77feb8b9f9b68eb10a115f361187710eedb123c004" exitCode=143
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.064060 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerDied","Data":"4e8f634b8ef683d623319b77feb8b9f9b68eb10a115f361187710eedb123c004"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.064250 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6" (OuterVolumeSpecName: "kube-api-access-28sl6") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "kube-api-access-28sl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.065541 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7" (OuterVolumeSpecName: "kube-api-access-zp4q7") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "kube-api-access-zp4q7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.072975 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config" (OuterVolumeSpecName: "config") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.078250 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa88bf20-ee44-4049-b46d-75f3a64d3a4d","Type":"ContainerDied","Data":"b59f2e5a9d96ace452f57e3f94e15694be225aeb92d27dacfb40cfd021cd2cc9"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.078363 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.080452 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts" (OuterVolumeSpecName: "scripts") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.087570 4915 generic.go:334] "Generic (PLEG): container finished" podID="bead4142-2b0e-41b1-85dd-cfe102596e93" containerID="5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4" exitCode=0
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.087737 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.087749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bead4142-2b0e-41b1-85dd-cfe102596e93","Type":"ContainerDied","Data":"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.087857 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bead4142-2b0e-41b1-85dd-cfe102596e93","Type":"ContainerDied","Data":"0c26d89328faf1df7d0177da71f0f038c798cddc7e51df26e524cc7b983478b6"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.088959 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl" (OuterVolumeSpecName: "kube-api-access-kkfzl") pod "bead4142-2b0e-41b1-85dd-cfe102596e93" (UID: "bead4142-2b0e-41b1-85dd-cfe102596e93"). InnerVolumeSpecName "kube-api-access-kkfzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.089873 4915 generic.go:334] "Generic (PLEG): container finished" podID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerID="c4b9293d64815d72131fea7b74e18820d6f50efa91a0dc1e08436c11314f0de5" exitCode=143
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.089935 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerDied","Data":"c4b9293d64815d72131fea7b74e18820d6f50efa91a0dc1e08436c11314f0de5"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.092237 4915 scope.go:117] "RemoveContainer" containerID="b4e015d7524e506289b969ff9782e3d4f4f302efdbc8387341c6e0b4bb26a7fc"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.098942 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"]
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.104461 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.107589 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-tjs2m"]
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.124239 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6c2e3568-b735-4d0a-a6ff-a4862f244a53/ovsdbserver-nb/0.log"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.124467 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.124863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6c2e3568-b735-4d0a-a6ff-a4862f244a53","Type":"ContainerDied","Data":"ac9a30e19f080413ec85fb2769002407bada744201034f2e8c8abc3d4efce794"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.129273 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-j8xr7" event={"ID":"06ad99e6-1205-4b12-8516-b5f0f595f0af","Type":"ContainerStarted","Data":"d32d83eaa2dbef64a0e1c5d18195950fada9eabf7c67f62937d8b2d1ecd400d6"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.133091 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.133259 4915 generic.go:334] "Generic (PLEG): container finished" podID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerID="4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950" exitCode=0
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.133341 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.133359 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerDied","Data":"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.133425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c245e6e6-955f-4f75-9427-3a3bd0f26c97","Type":"ContainerDied","Data":"7e61a28371d8914f1878731e91b9c9df8fde6ae215a001eae37c46265542323e"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.134169 4915 scope.go:117] "RemoveContainer" containerID="f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.139067 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.146322 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.149133 4915 generic.go:334] "Generic (PLEG): container finished" podID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerID="b733ca9ab35e4ebe64a9190112a4fec324eed485f565cf65e04c47fd1024429e" exitCode=143
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.149215 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerDied","Data":"b733ca9ab35e4ebe64a9190112a4fec324eed485f565cf65e04c47fd1024429e"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.150999 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157264 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7dvs\" (UniqueName: \"kubernetes.io/projected/754faa2e-19b3-47fb-9436-62d0ebd49ea4-kube-api-access-c7dvs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157301 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157313 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157326 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp4q7\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-kube-api-access-zp4q7\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157338 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157369 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157382 4915 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b2c635a-a36f-415f-9746-97620456c8c8-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157401 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157413 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754faa2e-19b3-47fb-9436-62d0ebd49ea4-config\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157425 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157436 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28sl6\" (UniqueName: \"kubernetes.io/projected/c245e6e6-955f-4f75-9427-3a3bd0f26c97-kube-api-access-28sl6\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157447 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfzl\" (UniqueName: \"kubernetes.io/projected/bead4142-2b0e-41b1-85dd-cfe102596e93-kube-api-access-kkfzl\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157458 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157470 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245e6e6-955f-4f75-9427-3a3bd0f26c97-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.157482 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b2c635a-a36f-415f-9746-97620456c8c8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.184524 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.194076 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db" exitCode=0
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.194103 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49" exitCode=0
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.194178 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zdx67"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.195294 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.195326 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49"}
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.195372 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lrzgd"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.195756 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.242266 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data" (OuterVolumeSpecName: "config-data") pod "bead4142-2b0e-41b1-85dd-cfe102596e93" (UID: "bead4142-2b0e-41b1-85dd-cfe102596e93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.260014 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.260042 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.275636 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "6c2e3568-b735-4d0a-a6ff-a4862f244a53" (UID: "6c2e3568-b735-4d0a-a6ff-a4862f244a53"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.276384 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.291111 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bead4142-2b0e-41b1-85dd-cfe102596e93" (UID: "bead4142-2b0e-41b1-85dd-cfe102596e93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.295200 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.297316 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.308724 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data" (OuterVolumeSpecName: "config-data") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.312259 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.341540 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c245e6e6-955f-4f75-9427-3a3bd0f26c97" (UID: "c245e6e6-955f-4f75-9427-3a3bd0f26c97"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.345556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.345628 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "bead4142-2b0e-41b1-85dd-cfe102596e93" (UID: "bead4142-2b0e-41b1-85dd-cfe102596e93"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.347418 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.355467 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1b2c635a-a36f-415f-9746-97620456c8c8" (UID: "1b2c635a-a36f-415f-9746-97620456c8c8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.361461 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "bead4142-2b0e-41b1-85dd-cfe102596e93" (UID: "bead4142-2b0e-41b1-85dd-cfe102596e93"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362634 4915 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362655 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362663 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362674 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362684 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362694 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362702 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2e3568-b735-4d0a-a6ff-a4862f244a53-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:52 crc kubenswrapper[4915]:
I0127 19:04:52.362711 4915 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c245e6e6-955f-4f75-9427-3a3bd0f26c97-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362719 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362727 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362735 4915 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bead4142-2b0e-41b1-85dd-cfe102596e93-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362743 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b2c635a-a36f-415f-9746-97620456c8c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.362751 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.387446 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "754faa2e-19b3-47fb-9436-62d0ebd49ea4" (UID: "754faa2e-19b3-47fb-9436-62d0ebd49ea4"). 
InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.452209 4915 scope.go:117] "RemoveContainer" containerID="ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.465930 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.465992 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data podName:b3ead5d8-b1e5-4145-a6de-64c316f4027e nodeName:}" failed. No retries permitted until 2026-01-27 19:04:56.465977618 +0000 UTC m=+1387.823831272 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data") pod "rabbitmq-cell1-server-0" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e") : configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.468655 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/754faa2e-19b3-47fb-9436-62d0ebd49ea4-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.475058 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.508334 4915 scope.go:117] "RemoveContainer" containerID="f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.523885 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7\": container with ID starting with f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7 not found: ID does not exist" containerID="f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.523928 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7"} err="failed to get container status \"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7\": rpc error: code = NotFound desc = could not find container \"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7\": container with ID starting with f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7 not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.523954 4915 scope.go:117] "RemoveContainer" containerID="ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.525267 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe\": container with ID starting with ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe not found: ID does not exist" containerID="ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" Jan 27 19:04:52 crc 
kubenswrapper[4915]: I0127 19:04:52.525307 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe"} err="failed to get container status \"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe\": rpc error: code = NotFound desc = could not find container \"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe\": container with ID starting with ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.525331 4915 scope.go:117] "RemoveContainer" containerID="f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.528373 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7"} err="failed to get container status \"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7\": rpc error: code = NotFound desc = could not find container \"f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7\": container with ID starting with f63983754e0643b78d05ce039d979443c2034e2c687b210c29e0e9c43c6c94c7 not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.528397 4915 scope.go:117] "RemoveContainer" containerID="ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533325 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8mqkb"] Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533656 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="galera" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533671 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="galera" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533688 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bead4142-2b0e-41b1-85dd-cfe102596e93" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533695 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bead4142-2b0e-41b1-85dd-cfe102596e93" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533706 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="dnsmasq-dns" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533711 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="dnsmasq-dns" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533731 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533738 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533746 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533751 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533765 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="init" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533771 4915 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="init" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533781 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="ovsdbserver-sb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533786 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="ovsdbserver-sb" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533809 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533815 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533833 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="cinder-scheduler" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533839 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="cinder-scheduler" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533849 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="ovsdbserver-nb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533855 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="ovsdbserver-nb" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533867 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533872 
4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533880 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-httpd" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533885 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-httpd" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533896 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533901 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533911 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="probe" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533916 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="probe" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533926 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-server" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533933 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-server" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533942 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="mysql-bootstrap" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533947 4915 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="mysql-bootstrap" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.533957 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="ovn-northd" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.533963 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="ovn-northd" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534119 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" containerName="galera" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534131 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534142 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bead4142-2b0e-41b1-85dd-cfe102596e93" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534148 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="ovsdbserver-sb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534156 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534165 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-httpd" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534173 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="probe" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534180 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" containerName="proxy-server" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534189 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534201 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" containerName="cinder-scheduler" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534211 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="openstack-network-exporter" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534220 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" containerName="ovn-northd" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534227 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" containerName="dnsmasq-dns" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534237 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" containerName="ovsdbserver-nb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534243 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" containerName="ovn-controller" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.534758 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.535484 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe"} err="failed to get container status \"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe\": rpc error: code = NotFound desc = could not find container \"ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe\": container with ID starting with ad6cf2c775928f1b27edb4cbc6dc646165d5a44a7fe20aa97f96400f3264d8fe not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.535509 4915 scope.go:117] "RemoveContainer" containerID="db6a738db0a4ebeeaf1061a5d01ac97d32cdc31fdce6dfd88815e40db583ca2f" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.543842 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.545937 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.573521 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.573534 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9ms\" (UniqueName: \"kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms\") pod \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.573760 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts\") pod 
\"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\" (UID: \"67a9425f-4628-4cad-a9d0-b455a3b6f2b1\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.575881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67a9425f-4628-4cad-a9d0-b455a3b6f2b1" (UID: "67a9425f-4628-4cad-a9d0-b455a3b6f2b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.584551 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms" (OuterVolumeSpecName: "kube-api-access-xg9ms") pod "67a9425f-4628-4cad-a9d0-b455a3b6f2b1" (UID: "67a9425f-4628-4cad-a9d0-b455a3b6f2b1"). InnerVolumeSpecName "kube-api-access-xg9ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.583855 4915 scope.go:117] "RemoveContainer" containerID="fd22bfbe90460c42cf3c357448b7851d433eaff827b51a95098297dad113e150" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.639703 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8mqkb"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.656360 4915 scope.go:117] "RemoveContainer" containerID="c41546d68783dedc9647f66f8a74dbf9122942d0f78c5b48713646f399bbd1fc" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.656815 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.676782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4nt\" (UniqueName: \"kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt\") pod 
\"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.677106 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts\") pod \"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.677334 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9ms\" (UniqueName: \"kubernetes.io/projected/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-kube-api-access-xg9ms\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.677347 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a9425f-4628-4cad-a9d0-b455a3b6f2b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.698253 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-j8xr7" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.703893 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.711130 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.718700 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lrzgd"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.722540 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.740328 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lrzgd"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.740369 4915 scope.go:117] "RemoveContainer" containerID="cae1cda15bc05c31ba74684aadd103b351b56954bbbc994031b930a06b055b44" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.740972 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.753969 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.759178 4915 scope.go:117] "RemoveContainer" containerID="5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.773563 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.778673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72m74\" (UniqueName: \"kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74\") pod \"06ad99e6-1205-4b12-8516-b5f0f595f0af\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.778746 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts\") pod \"06ad99e6-1205-4b12-8516-b5f0f595f0af\" (UID: \"06ad99e6-1205-4b12-8516-b5f0f595f0af\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.779072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4nt\" (UniqueName: 
\"kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt\") pod \"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.779968 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06ad99e6-1205-4b12-8516-b5f0f595f0af" (UID: "06ad99e6-1205-4b12-8516-b5f0f595f0af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.781717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts\") pod \"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.781827 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06ad99e6-1205-4b12-8516-b5f0f595f0af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.783021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74" (OuterVolumeSpecName: "kube-api-access-72m74") pod "06ad99e6-1205-4b12-8516-b5f0f595f0af" (UID: "06ad99e6-1205-4b12-8516-b5f0f595f0af"). InnerVolumeSpecName "kube-api-access-72m74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.783202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts\") pod \"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.784860 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.787397 4915 scope.go:117] "RemoveContainer" containerID="5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.788203 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4\": container with ID starting with 5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4 not found: ID does not exist" containerID="5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.788225 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4"} err="failed to get container status \"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4\": rpc error: code = NotFound desc = could not find container \"5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4\": container with ID starting with 5d4ade3f487b1e4db7db4b01ad564f77089c9959eb1510f7aa9062feafcbd4c4 not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.788249 4915 scope.go:117] "RemoveContainer" 
containerID="c06502ec99303d1016158ff95946da489a6d421aee3a2b886bff7e1bb8d8498f" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.789561 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.795166 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4nt\" (UniqueName: \"kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt\") pod \"root-account-create-update-8mqkb\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.795231 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-zdx67"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.800662 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-zdx67"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.812723 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.815296 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5fb74bb44c-7sqss"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.815557 4915 scope.go:117] "RemoveContainer" containerID="0ab2050c29aa330700365f1e8b79ab3720b9477e1281313c21a50e2387bf14ae" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.820200 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.825565 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.834374 4915 scope.go:117] "RemoveContainer" containerID="4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 
19:04:52.857388 4915 scope.go:117] "RemoveContainer" containerID="baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.878400 4915 scope.go:117] "RemoveContainer" containerID="4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.878866 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950\": container with ID starting with 4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950 not found: ID does not exist" containerID="4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.878909 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950"} err="failed to get container status \"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950\": rpc error: code = NotFound desc = could not find container \"4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950\": container with ID starting with 4f3d5336750659c5dda939caa3a53bdbca4ff115b87947e9d27b31dbf4e09950 not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.878934 4915 scope.go:117] "RemoveContainer" containerID="baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b" Jan 27 19:04:52 crc kubenswrapper[4915]: E0127 19:04:52.879751 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b\": container with ID starting with baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b not found: ID does not exist" 
containerID="baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.879788 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b"} err="failed to get container status \"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b\": rpc error: code = NotFound desc = could not find container \"baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b\": container with ID starting with baa30aa2c659658be9ba23bbd37aac31d9208153a31b6caf3fc6bdb6c9e2629b not found: ID does not exist" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882357 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882417 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slwsx\" (UniqueName: \"kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx\") pod \"0dfca067-a625-475b-9443-cde8a54f12af\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882441 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c7qh\" (UniqueName: \"kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882475 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts\") pod \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882507 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882556 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts\") pod \"0dfca067-a625-475b-9443-cde8a54f12af\" (UID: \"0dfca067-a625-475b-9443-cde8a54f12af\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882657 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882756 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882811 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882853 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882888 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p84z\" (UniqueName: \"kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z\") pod \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\" (UID: \"7ff3f576-3c7a-434d-b4be-51ff90cbd199\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.882925 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom\") pod \"b5debd92-961a-492a-8e9d-a51652a3a84a\" (UID: \"b5debd92-961a-492a-8e9d-a51652a3a84a\") " Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.883229 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ff3f576-3c7a-434d-b4be-51ff90cbd199" (UID: "7ff3f576-3c7a-434d-b4be-51ff90cbd199"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.883474 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff3f576-3c7a-434d-b4be-51ff90cbd199-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.883494 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72m74\" (UniqueName: \"kubernetes.io/projected/06ad99e6-1205-4b12-8516-b5f0f595f0af-kube-api-access-72m74\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.883638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.884116 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dfca067-a625-475b-9443-cde8a54f12af" (UID: "0dfca067-a625-475b-9443-cde8a54f12af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.884202 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs" (OuterVolumeSpecName: "logs") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.885956 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx" (OuterVolumeSpecName: "kube-api-access-slwsx") pod "0dfca067-a625-475b-9443-cde8a54f12af" (UID: "0dfca067-a625-475b-9443-cde8a54f12af"). InnerVolumeSpecName "kube-api-access-slwsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.886027 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh" (OuterVolumeSpecName: "kube-api-access-7c7qh") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "kube-api-access-7c7qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.886591 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z" (OuterVolumeSpecName: "kube-api-access-7p84z") pod "7ff3f576-3c7a-434d-b4be-51ff90cbd199" (UID: "7ff3f576-3c7a-434d-b4be-51ff90cbd199"). InnerVolumeSpecName "kube-api-access-7p84z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.887183 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts" (OuterVolumeSpecName: "scripts") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.887630 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.893434 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.911453 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.933465 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.944012 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.952979 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data" (OuterVolumeSpecName: "config-data") pod "b5debd92-961a-492a-8e9d-a51652a3a84a" (UID: "b5debd92-961a-492a-8e9d-a51652a3a84a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986484 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfca067-a625-475b-9443-cde8a54f12af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986675 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986684 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986693 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5debd92-961a-492a-8e9d-a51652a3a84a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986702 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5debd92-961a-492a-8e9d-a51652a3a84a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986710 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p84z\" (UniqueName: 
\"kubernetes.io/projected/7ff3f576-3c7a-434d-b4be-51ff90cbd199-kube-api-access-7p84z\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986718 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986726 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986735 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slwsx\" (UniqueName: \"kubernetes.io/projected/0dfca067-a625-475b-9443-cde8a54f12af-kube-api-access-slwsx\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986745 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c7qh\" (UniqueName: \"kubernetes.io/projected/b5debd92-961a-492a-8e9d-a51652a3a84a-kube-api-access-7c7qh\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986753 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:52 crc kubenswrapper[4915]: I0127 19:04:52.986762 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5debd92-961a-492a-8e9d-a51652a3a84a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.142253 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.142558 4915 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-central-agent" containerID="cri-o://91dde6520b09dd9a3bee7b1dccad272eed3957544bd65c78044cde31b5a5a33a" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.143030 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="proxy-httpd" containerID="cri-o://442aa21a700a655bb2c5e592d27927399226ce7e4818c5d5466669e4014e4238" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.156153 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-notification-agent" containerID="cri-o://b6d441dae839212e31371ba1c55077036e6541f4ea0dbed85fba70fa1100e2eb" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.158162 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="sg-core" containerID="cri-o://fe19ba02a7b46df32800af54c264a8f50785062daa8c7d556b47dcdfaa7f8440" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.186859 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.187066 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="900ea28b-9ef8-41fe-a522-044443efa94b" containerName="kube-state-metrics" containerID="cri-o://d35fdae5565950f6f21c26cee4bc83f55e54749039323306d48c13acce01c4b3" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.304873 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0e30-account-create-update-pf994"] Jan 27 19:04:53 
crc kubenswrapper[4915]: I0127 19:04:53.328097 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.328330 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" containerName="memcached" containerID="cri-o://e8dc10179f8954b400ea8cb8db73ee6638c728824ddb7504365e79a8f13b926f" gracePeriod=30 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.337098 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerID="8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f" exitCode=0 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.337153 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerDied","Data":"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.337179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5debd92-961a-492a-8e9d-a51652a3a84a","Type":"ContainerDied","Data":"f9ad03f4f9e761435ad7cf78ce899ccee0fa26fae2f7c76ed3e984839f42ecc2"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.337213 4915 scope.go:117] "RemoveContainer" containerID="8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.337306 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.349392 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0e30-account-create-update-pf994"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.385038 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2bngm" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.404292 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.411732 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0060401c-f9b6-4772-bce5-bda7633de81a" path="/var/lib/kubelet/pods/0060401c-f9b6-4772-bce5-bda7633de81a/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.412504 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2c635a-a36f-415f-9746-97620456c8c8" path="/var/lib/kubelet/pods/1b2c635a-a36f-415f-9746-97620456c8c8/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.413074 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef0121e-da20-4815-bdba-f03c90dea333" path="/var/lib/kubelet/pods/4ef0121e-da20-4815-bdba-f03c90dea333/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.436554 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67897b01-d7d4-465f-9b98-ca325dabb449" path="/var/lib/kubelet/pods/67897b01-d7d4-465f-9b98-ca325dabb449/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.438577 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2e3568-b735-4d0a-a6ff-a4862f244a53" path="/var/lib/kubelet/pods/6c2e3568-b735-4d0a-a6ff-a4862f244a53/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.439862 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754faa2e-19b3-47fb-9436-62d0ebd49ea4" path="/var/lib/kubelet/pods/754faa2e-19b3-47fb-9436-62d0ebd49ea4/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.447097 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84da5d2b-b9c1-41ef-9222-aaf3e67ff232" 
path="/var/lib/kubelet/pods/84da5d2b-b9c1-41ef-9222-aaf3e67ff232/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.451142 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cbe256-8e24-4222-9638-78b805a12278" path="/var/lib/kubelet/pods/95cbe256-8e24-4222-9638-78b805a12278/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.451678 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae7274f-1da9-4023-96b1-30cca477c6a2" path="/var/lib/kubelet/pods/aae7274f-1da9-4023-96b1-30cca477c6a2/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.453110 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bead4142-2b0e-41b1-85dd-cfe102596e93" path="/var/lib/kubelet/pods/bead4142-2b0e-41b1-85dd-cfe102596e93/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.453902 4915 generic.go:334] "Generic (PLEG): container finished" podID="102e986e-f101-4f49-af96-50368468f7b4" containerID="ad1a62b5c13ac79f78f79de53e79d8b6e2d71d37af6a621d62c0f82ef67828b5" exitCode=0 Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.454695 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c245e6e6-955f-4f75-9427-3a3bd0f26c97" path="/var/lib/kubelet/pods/c245e6e6-955f-4f75-9427-3a3bd0f26c97/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.456482 4915 scope.go:117] "RemoveContainer" containerID="a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.466388 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fc87-account-create-update-jl699" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.469665 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa88bf20-ee44-4049-b46d-75f3a64d3a4d" path="/var/lib/kubelet/pods/fa88bf20-ee44-4049-b46d-75f3a64d3a4d/volumes" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.471319 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0e30-account-create-update-jdsnb"] Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.471645 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.471665 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api" Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.471700 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api-log" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.471709 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api-log" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.481539 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api-log" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.481611 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" containerName="cinder-api" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.482961 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.482845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2bngm" event={"ID":"7ff3f576-3c7a-434d-b4be-51ff90cbd199","Type":"ContainerDied","Data":"a5a5e9f36c5e063b4d2345770629c6d013836cd7d2e150d73ce5aec5a40ee06b"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483113 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8e80-account-create-update-47h54" event={"ID":"0dfca067-a625-475b-9443-cde8a54f12af","Type":"ContainerDied","Data":"6c27699e4af34d99e0cb26ce3f19167ee145015210d51302902bc63350d1d7d8"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483133 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e30-account-create-update-jdsnb"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483152 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-snpdp"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483168 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-snpdp"] Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerDied","Data":"ad1a62b5c13ac79f78f79de53e79d8b6e2d71d37af6a621d62c0f82ef67828b5"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.483202 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fc87-account-create-update-jl699" event={"ID":"67a9425f-4628-4cad-a9d0-b455a3b6f2b1","Type":"ContainerDied","Data":"60214b5f964ef3ca2c9c45c1257962fc4645ef6fbf2cdc38ad5dffffe83aea39"} Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.485550 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-db-secret"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.491607 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:60372->10.217.0.207:8775: read: connection reset by peer"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.491902 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:60380->10.217.0.207:8775: read: connection reset by peer"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.493962 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2fe5-account-create-update-j8xr7" event={"ID":"06ad99e6-1205-4b12-8516-b5f0f595f0af","Type":"ContainerDied","Data":"d32d83eaa2dbef64a0e1c5d18195950fada9eabf7c67f62937d8b2d1ecd400d6"}
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.494138 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2fe5-account-create-update-j8xr7"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.498890 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8mqkb"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.540915 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-lqql2"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.557883 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-lqql2"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.610985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tnx\" (UniqueName: \"kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.611080 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.611982 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.655898 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.667901 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.721315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tnx\" (UniqueName: \"kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.721364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.721483 4915 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.721531 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:54.221516577 +0000 UTC m=+1385.579370241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : configmap "openstack-scripts" not found
Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.768062 4915 projected.go:194] Error preparing data for projected volume kube-api-access-j2tnx for pod openstack/keystone-0e30-account-create-update-jdsnb: failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 27 19:04:53 crc kubenswrapper[4915]: E0127 19:04:53.770124 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:54.270099015 +0000 UTC m=+1385.627952679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j2tnx" (UniqueName: "kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.792931 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.793418 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5bf7f58cfb-6c779" podUID="d0031b79-12aa-4487-8501-6e122053cc13" containerName="keystone-api" containerID="cri-o://5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675" gracePeriod=30
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.801334 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-st6bz"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.841929 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-st6bz"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.848222 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0e30-account-create-update-jdsnb"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.863961 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8mqkb"]
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.920957 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7644f9784b-dbhxl" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:42946->10.217.0.166:9311: read: connection reset by peer"
Jan 27 19:04:53 crc kubenswrapper[4915]: I0127 19:04:53.921018 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7644f9784b-dbhxl" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:42952->10.217.0.166:9311: read: connection reset by peer"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.048402 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="galera" containerID="cri-o://b05c34d07645b4a14d56b4d52e63ae462133c80866f332009ef2ce490415bcdc" gracePeriod=30
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.240717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.240948 4915 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.241001 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:55.240986576 +0000 UTC m=+1386.598840240 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : configmap "openstack-scripts" not found
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.264706 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btsbx"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.323034 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btsbx"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.342720 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tnx\" (UniqueName: \"kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb"
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.351619 4915 projected.go:194] Error preparing data for projected volume kube-api-access-j2tnx for pod openstack/keystone-0e30-account-create-update-jdsnb: failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.351701 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:55.351679268 +0000 UTC m=+1386.709532932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j2tnx" (UniqueName: "kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.439465 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.446594 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.465317 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.465398 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerName="nova-cell1-conductor-conductor"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532154 4915 generic.go:334] "Generic (PLEG): container finished" podID="f765c967-cd1f-44be-bf35-200a93f06c08" containerID="442aa21a700a655bb2c5e592d27927399226ce7e4818c5d5466669e4014e4238" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532181 4915 generic.go:334] "Generic (PLEG): container finished" podID="f765c967-cd1f-44be-bf35-200a93f06c08" containerID="fe19ba02a7b46df32800af54c264a8f50785062daa8c7d556b47dcdfaa7f8440" exitCode=2
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532189 4915 generic.go:334] "Generic (PLEG): container finished" podID="f765c967-cd1f-44be-bf35-200a93f06c08" containerID="91dde6520b09dd9a3bee7b1dccad272eed3957544bd65c78044cde31b5a5a33a" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532224 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerDied","Data":"442aa21a700a655bb2c5e592d27927399226ce7e4818c5d5466669e4014e4238"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerDied","Data":"fe19ba02a7b46df32800af54c264a8f50785062daa8c7d556b47dcdfaa7f8440"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.532259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerDied","Data":"91dde6520b09dd9a3bee7b1dccad272eed3957544bd65c78044cde31b5a5a33a"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.535222 4915 generic.go:334] "Generic (PLEG): container finished" podID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerID="a933786d7cc9361f17ff67cb45c92e9e82dd73239e589fc136036527cb4a3031" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.535259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerDied","Data":"a933786d7cc9361f17ff67cb45c92e9e82dd73239e589fc136036527cb4a3031"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.535275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d","Type":"ContainerDied","Data":"5025463ff8a29d3149d68f010b1b41c3ce646798a3bf71eeadf7d64743293617"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.535284 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5025463ff8a29d3149d68f010b1b41c3ce646798a3bf71eeadf7d64743293617"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.536704 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cff5fcc84-lxsfm" event={"ID":"102e986e-f101-4f49-af96-50368468f7b4","Type":"ContainerDied","Data":"55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.536724 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fb094174121245c8c88645bfb4614591493838efd05635bfa7bab760a3f60e"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.538141 4915 generic.go:334] "Generic (PLEG): container finished" podID="900ea28b-9ef8-41fe-a522-044443efa94b" containerID="d35fdae5565950f6f21c26cee4bc83f55e54749039323306d48c13acce01c4b3" exitCode=2
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.538188 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"900ea28b-9ef8-41fe-a522-044443efa94b","Type":"ContainerDied","Data":"d35fdae5565950f6f21c26cee4bc83f55e54749039323306d48c13acce01c4b3"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.538204 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"900ea28b-9ef8-41fe-a522-044443efa94b","Type":"ContainerDied","Data":"5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.538215 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad5df697990ab497d2703a3e7316bbfe3f5b9aef5b1c68bf85abc512589c490"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.539770 4915 generic.go:334] "Generic (PLEG): container finished" podID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerID="e45fa431d5b8ceb54b435ca3de9cafe6ec697d885699991a82ee10f7135d868b" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.539828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerDied","Data":"e45fa431d5b8ceb54b435ca3de9cafe6ec697d885699991a82ee10f7135d868b"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.541494 4915 generic.go:334] "Generic (PLEG): container finished" podID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerID="34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.541576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5f29847-cfad-4f69-aff9-9c62b0088754","Type":"ContainerDied","Data":"34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.541613 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5f29847-cfad-4f69-aff9-9c62b0088754","Type":"ContainerDied","Data":"4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.541627 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ecd5e9be248d7e219dd1ae230699a91ebdddf39f4b01e58edfa4c7ff1a2e0e4"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.543470 4915 generic.go:334] "Generic (PLEG): container finished" podID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerID="f76d58b03c67cca894c7c03c90ad9ed4cc48b08790c07b0c5345806e5f49dc1d" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.543502 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerDied","Data":"f76d58b03c67cca894c7c03c90ad9ed4cc48b08790c07b0c5345806e5f49dc1d"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.544362 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8mqkb" event={"ID":"b7965677-0846-4016-ab96-efe29db9327c","Type":"ContainerStarted","Data":"1645614c1fed111898ba08055b669b2d4f5fa6163bb096c5eb76c0fb246392cd"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.545554 4915 generic.go:334] "Generic (PLEG): container finished" podID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" containerID="e8dc10179f8954b400ea8cb8db73ee6638c728824ddb7504365e79a8f13b926f" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.545624 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b01be235-2ab9-4e61-a5a4-1d006a9e6679","Type":"ContainerDied","Data":"e8dc10179f8954b400ea8cb8db73ee6638c728824ddb7504365e79a8f13b926f"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.549280 4915 generic.go:334] "Generic (PLEG): container finished" podID="cc16c397-45e2-4878-8927-752a1832ec0a" containerID="3071a4d932d0954ba665a0964a52de9a30feb98cb297eae5bdd4c4d385c63546" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.549374 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerDied","Data":"3071a4d932d0954ba665a0964a52de9a30feb98cb297eae5bdd4c4d385c63546"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.549405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc16c397-45e2-4878-8927-752a1832ec0a","Type":"ContainerDied","Data":"3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.549418 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f20568226549974c479a6833447d9a996f9dcbf1046d70b71bd24fce0d9f202"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.551722 4915 generic.go:334] "Generic (PLEG): container finished" podID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerID="6cce6eea7541be78647a81657904cffa63e1bcb52b367db413e89aa732f8add2" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.551772 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerDied","Data":"6cce6eea7541be78647a81657904cffa63e1bcb52b367db413e89aa732f8add2"}
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.554607 4915 generic.go:334] "Generic (PLEG): container finished" podID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerID="dd40b8fd0daebe36f826489ae0a0b8562d952c77d4c20c94744566de6a48b7a9" exitCode=0
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.554871 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerDied","Data":"dd40b8fd0daebe36f826489ae0a0b8562d952c77d4c20c94744566de6a48b7a9"}
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.613130 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063 is running failed: container process not found" containerID="34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.613411 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063 is running failed: container process not found" containerID="34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.613632 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063 is running failed: container process not found" containerID="34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.613668 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerName="nova-scheduler-scheduler"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.741806 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cff5fcc84-lxsfm"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.760351 4915 scope.go:117] "RemoveContainer" containerID="8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f"
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.760706 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f\": container with ID starting with 8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f not found: ID does not exist" containerID="8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.760728 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f"} err="failed to get container status \"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f\": rpc error: code = NotFound desc = could not find container \"8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f\": container with ID starting with 8fca0d05b5fe82e3c340f29da10ad9c25f93a4f93ae8f0f3914c11e06c53298f not found: ID does not exist"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.760748 4915 scope.go:117] "RemoveContainer" containerID="a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2"
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.760933 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2\": container with ID starting with a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2 not found: ID does not exist" containerID="a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.760950 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2"} err="failed to get container status \"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2\": rpc error: code = NotFound desc = could not find container \"a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2\": container with ID starting with a1132d8d6f288010605cbaf9d48ce537f25622fc42632084c22783f5251724a2 not found: ID does not exist"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.777320 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 19:04:54 crc kubenswrapper[4915]: E0127 19:04:54.783295 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-j2tnx operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-0e30-account-create-update-jdsnb" podUID="1586b620-c9ac-4c33-a1d4-5c83dec7c5f5"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.804662 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.814139 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2bngm"]
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.818181 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2bngm"]
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.831259 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.835325 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.841401 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7644f9784b-dbhxl"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852468 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzzj2\" (UniqueName: \"kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852543 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852569 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mchd4\" (UniqueName: \"kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852592 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852632 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852700 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852774 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852797 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852815 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852852 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852868 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852893 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle\") pod \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\" (UID: \"c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.852911 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs\") pod \"102e986e-f101-4f49-af96-50368468f7b4\" (UID: \"102e986e-f101-4f49-af96-50368468f7b4\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.853920 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs" (OuterVolumeSpecName: "logs") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.857734 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.860247 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4" (OuterVolumeSpecName: "kube-api-access-mchd4") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "kube-api-access-mchd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.861198 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs" (OuterVolumeSpecName: "logs") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.867988 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd"
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.874226 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts" (OuterVolumeSpecName: "scripts") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.876625 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2" (OuterVolumeSpecName: "kube-api-access-tzzj2") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "kube-api-access-tzzj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.926970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data" (OuterVolumeSpecName: "config-data") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.941528 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"]
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.958305 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom\") pod \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.958429 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q948r\" (UniqueName: \"kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r\") pod \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.958711 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.958855 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config\") pod \"900ea28b-9ef8-41fe-a522-044443efa94b\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.958969 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959100 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle\") pod \"d5f29847-cfad-4f69-aff9-9c62b0088754\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959215 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959343 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle\") pod \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959465 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm9cd\" (UniqueName: \"kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959605 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") "
Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959706 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName:
\"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs\") pod \"900ea28b-9ef8-41fe-a522-044443efa94b\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959801 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fbtw\" (UniqueName: \"kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.959921 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnx5s\" (UniqueName: \"kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960245 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data\") pod \"d5f29847-cfad-4f69-aff9-9c62b0088754\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " Jan 27 19:04:54 crc 
kubenswrapper[4915]: I0127 19:04:54.960356 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960463 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960558 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfm5d\" (UniqueName: \"kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d\") pod \"900ea28b-9ef8-41fe-a522-044443efa94b\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960772 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.960888 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961000 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs\") pod 
\"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961110 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddpd\" (UniqueName: \"kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd\") pod \"d5f29847-cfad-4f69-aff9-9c62b0088754\" (UID: \"d5f29847-cfad-4f69-aff9-9c62b0088754\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961218 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961315 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961416 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961504 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961622 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs\") pod \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\" (UID: \"1ec4f102-db6b-4f45-a5f4-1aad213e05fb\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961729 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data\") pod \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data\") pod \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\" (UID: \"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.961939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs\") pod \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\" (UID: \"8079a88f-5f47-4988-b4c8-6031fbfc9dd8\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.962045 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle\") pod \"900ea28b-9ef8-41fe-a522-044443efa94b\" (UID: \"900ea28b-9ef8-41fe-a522-044443efa94b\") " Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.962628 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.962727 4915 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.963561 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/102e986e-f101-4f49-af96-50368468f7b4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.963707 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzzj2\" (UniqueName: \"kubernetes.io/projected/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-kube-api-access-tzzj2\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.976606 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.976634 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mchd4\" (UniqueName: \"kubernetes.io/projected/102e986e-f101-4f49-af96-50368468f7b4-kube-api-access-mchd4\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.974084 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.965167 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r" (OuterVolumeSpecName: "kube-api-access-q948r") pod "8079a88f-5f47-4988-b4c8-6031fbfc9dd8" (UID: "8079a88f-5f47-4988-b4c8-6031fbfc9dd8"). InnerVolumeSpecName "kube-api-access-q948r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.965253 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw" (OuterVolumeSpecName: "kube-api-access-5fbtw") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "kube-api-access-5fbtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.975555 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd" (OuterVolumeSpecName: "kube-api-access-sm9cd") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "kube-api-access-sm9cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.976930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.977784 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs" (OuterVolumeSpecName: "logs") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:54 crc kubenswrapper[4915]: I0127 19:04:54.978091 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs" (OuterVolumeSpecName: "logs") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:54.997998 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8079a88f-5f47-4988-b4c8-6031fbfc9dd8" (UID: "8079a88f-5f47-4988-b4c8-6031fbfc9dd8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.004274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.015396 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fc87-account-create-update-jl699"] Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.015665 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.016006 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs" (OuterVolumeSpecName: "logs") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.016829 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs" (OuterVolumeSpecName: "logs") pod "8079a88f-5f47-4988-b4c8-6031fbfc9dd8" (UID: "8079a88f-5f47-4988-b4c8-6031fbfc9dd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.031252 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.039306 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.039918 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s" (OuterVolumeSpecName: "kube-api-access-pnx5s") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "kube-api-access-pnx5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.040394 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd" (OuterVolumeSpecName: "kube-api-access-pddpd") pod "d5f29847-cfad-4f69-aff9-9c62b0088754" (UID: "d5f29847-cfad-4f69-aff9-9c62b0088754"). InnerVolumeSpecName "kube-api-access-pddpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.053095 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d" (OuterVolumeSpecName: "kube-api-access-cfm5d") pod "900ea28b-9ef8-41fe-a522-044443efa94b" (UID: "900ea28b-9ef8-41fe-a522-044443efa94b"). InnerVolumeSpecName "kube-api-access-cfm5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.053204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts" (OuterVolumeSpecName: "scripts") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.054624 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.069522 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.078192 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhj2\" (UniqueName: \"kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2\") pod \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.078244 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config\") pod \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.078319 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data\") pod \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.078399 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle\") pod \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\" (UID: \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.078522 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs\") pod \"b01be235-2ab9-4e61-a5a4-1d006a9e6679\" (UID: 
\"b01be235-2ab9-4e61-a5a4-1d006a9e6679\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.079218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b01be235-2ab9-4e61-a5a4-1d006a9e6679" (UID: "b01be235-2ab9-4e61-a5a4-1d006a9e6679"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.079980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data" (OuterVolumeSpecName: "config-data") pod "b01be235-2ab9-4e61-a5a4-1d006a9e6679" (UID: "b01be235-2ab9-4e61-a5a4-1d006a9e6679"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080123 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm9cd\" (UniqueName: \"kubernetes.io/projected/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-kube-api-access-sm9cd\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080144 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080154 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fbtw\" (UniqueName: \"kubernetes.io/projected/cc16c397-45e2-4878-8927-752a1832ec0a-kube-api-access-5fbtw\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080163 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnx5s\" (UniqueName: \"kubernetes.io/projected/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-kube-api-access-pnx5s\") on node \"crc\" 
DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080171 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080181 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080189 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc16c397-45e2-4878-8927-752a1832ec0a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080198 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080206 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfm5d\" (UniqueName: \"kubernetes.io/projected/900ea28b-9ef8-41fe-a522-044443efa94b-kube-api-access-cfm5d\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080214 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080222 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pddpd\" (UniqueName: \"kubernetes.io/projected/d5f29847-cfad-4f69-aff9-9c62b0088754-kube-api-access-pddpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080230 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080238 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080246 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080254 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080262 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q948r\" (UniqueName: \"kubernetes.io/projected/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-kube-api-access-q948r\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.080271 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.082947 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2fe5-account-create-update-j8xr7"] Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.089376 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2" (OuterVolumeSpecName: "kube-api-access-cwhj2") pod "b01be235-2ab9-4e61-a5a4-1d006a9e6679" (UID: "b01be235-2ab9-4e61-a5a4-1d006a9e6679"). 
InnerVolumeSpecName "kube-api-access-cwhj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.093886 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.114464 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.167486 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.168184 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data" (OuterVolumeSpecName: "config-data") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.168458 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b01be235-2ab9-4e61-a5a4-1d006a9e6679" (UID: "b01be235-2ab9-4e61-a5a4-1d006a9e6679"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.173749 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data" (OuterVolumeSpecName: "config-data") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181023 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data\") pod \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181196 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle\") pod \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181241 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181332 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181372 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181425 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181461 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdmjj\" (UniqueName: \"kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrck2\" (UniqueName: \"kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2\") pod \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\" (UID: \"5acd6cd6-a7d4-4839-b3c1-aec924797e53\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181666 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: 
\"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181714 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle\") pod \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\" (UID: \"02e16a69-5c98-4e52-ad1c-bf08c989cd88\") " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.181896 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182378 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182398 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwhj2\" (UniqueName: \"kubernetes.io/projected/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kube-api-access-cwhj2\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182413 4915 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182426 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182437 4915 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182448 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b01be235-2ab9-4e61-a5a4-1d006a9e6679-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182458 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182469 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.182479 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.184125 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" (UID: "c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.185218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "900ea28b-9ef8-41fe-a522-044443efa94b" (UID: "900ea28b-9ef8-41fe-a522-044443efa94b"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.188652 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs" (OuterVolumeSpecName: "logs") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.193677 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts" (OuterVolumeSpecName: "scripts") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.209431 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2" (OuterVolumeSpecName: "kube-api-access-zrck2") pod "5acd6cd6-a7d4-4839-b3c1-aec924797e53" (UID: "5acd6cd6-a7d4-4839-b3c1-aec924797e53"). InnerVolumeSpecName "kube-api-access-zrck2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.221987 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.225776 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj" (OuterVolumeSpecName: "kube-api-access-tdmjj") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "kube-api-access-tdmjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.248097 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900ea28b-9ef8-41fe-a522-044443efa94b" (UID: "900ea28b-9ef8-41fe-a522-044443efa94b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.248840 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data" (OuterVolumeSpecName: "config-data") pod "d5f29847-cfad-4f69-aff9-9c62b0088754" (UID: "d5f29847-cfad-4f69-aff9-9c62b0088754"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.255038 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.255692 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8079a88f-5f47-4988-b4c8-6031fbfc9dd8" (UID: "8079a88f-5f47-4988-b4c8-6031fbfc9dd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.258929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f29847-cfad-4f69-aff9-9c62b0088754" (UID: "d5f29847-cfad-4f69-aff9-9c62b0088754"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284140 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284161 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284173 4915 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284187 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284212 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284225 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284237 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284250 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284262 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdmjj\" (UniqueName: \"kubernetes.io/projected/02e16a69-5c98-4e52-ad1c-bf08c989cd88-kube-api-access-tdmjj\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284427 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f29847-cfad-4f69-aff9-9c62b0088754-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284440 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrck2\" (UniqueName: \"kubernetes.io/projected/5acd6cd6-a7d4-4839-b3c1-aec924797e53-kube-api-access-zrck2\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.284450 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02e16a69-5c98-4e52-ad1c-bf08c989cd88-logs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.286041 4915 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.286224 4915 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:57.286199718 +0000 UTC m=+1388.644053452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : configmap "openstack-scripts" not found Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.293014 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b01be235-2ab9-4e61-a5a4-1d006a9e6679" (UID: "b01be235-2ab9-4e61-a5a4-1d006a9e6679"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.297607 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data" (OuterVolumeSpecName: "config-data") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.306554 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.334610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data" (OuterVolumeSpecName: "config-data") pod "5acd6cd6-a7d4-4839-b3c1-aec924797e53" (UID: "5acd6cd6-a7d4-4839-b3c1-aec924797e53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.336560 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ec4f102-db6b-4f45-a5f4-1aad213e05fb" (UID: "1ec4f102-db6b-4f45-a5f4-1aad213e05fb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.336701 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.343464 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.353724 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.356063 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.390621 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.390828 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") pod \"cc16c397-45e2-4878-8927-752a1832ec0a\" (UID: \"cc16c397-45e2-4878-8927-752a1832ec0a\") " Jan 27 19:04:55 crc kubenswrapper[4915]: W0127 19:04:55.391487 4915 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cc16c397-45e2-4878-8927-752a1832ec0a/volumes/kubernetes.io~secret/combined-ca-bundle Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.391509 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc16c397-45e2-4878-8927-752a1832ec0a" (UID: "cc16c397-45e2-4878-8927-752a1832ec0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392055 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tnx\" (UniqueName: \"kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx\") pod \"keystone-0e30-account-create-update-jdsnb\" (UID: \"1586b620-c9ac-4c33-a1d4-5c83dec7c5f5\") " pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392317 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392344 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392358 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392373 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392391 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392403 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392416 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc16c397-45e2-4878-8927-752a1832ec0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392427 4915 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b01be235-2ab9-4e61-a5a4-1d006a9e6679-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392445 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec4f102-db6b-4f45-a5f4-1aad213e05fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.392457 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.400669 4915 projected.go:194] Error preparing data for projected volume kube-api-access-j2tnx for pod openstack/keystone-0e30-account-create-update-jdsnb: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.400758 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx podName:1586b620-c9ac-4c33-a1d4-5c83dec7c5f5 nodeName:}" failed. No retries permitted until 2026-01-27 19:04:57.400733802 +0000 UTC m=+1388.758587466 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j2tnx" (UniqueName: "kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx") pod "keystone-0e30-account-create-update-jdsnb" (UID: "1586b620-c9ac-4c33-a1d4-5c83dec7c5f5") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.401075 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ad99e6-1205-4b12-8516-b5f0f595f0af" path="/var/lib/kubelet/pods/06ad99e6-1205-4b12-8516-b5f0f595f0af/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.402058 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b0c87d-6ff3-47cc-8991-884a52945586" path="/var/lib/kubelet/pods/50b0c87d-6ff3-47cc-8991-884a52945586/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.403063 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a9425f-4628-4cad-a9d0-b455a3b6f2b1" path="/var/lib/kubelet/pods/67a9425f-4628-4cad-a9d0-b455a3b6f2b1/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.403562 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff3f576-3c7a-434d-b4be-51ff90cbd199" path="/var/lib/kubelet/pods/7ff3f576-3c7a-434d-b4be-51ff90cbd199/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.404101 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c0c7d8-6044-4702-ac31-e90652d15248" path="/var/lib/kubelet/pods/a6c0c7d8-6044-4702-ac31-e90652d15248/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.409217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.410303 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5debd92-961a-492a-8e9d-a51652a3a84a" path="/var/lib/kubelet/pods/b5debd92-961a-492a-8e9d-a51652a3a84a/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.412927 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d599751d-9b39-4de0-bce6-d1fd71f1fe0e" path="/var/lib/kubelet/pods/d599751d-9b39-4de0-bce6-d1fd71f1fe0e/volumes" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.422273 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.424100 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "900ea28b-9ef8-41fe-a522-044443efa94b" (UID: "900ea28b-9ef8-41fe-a522-044443efa94b"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.437937 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.438167 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data" (OuterVolumeSpecName: "config-data") pod "02e16a69-5c98-4e52-ad1c-bf08c989cd88" (UID: "02e16a69-5c98-4e52-ad1c-bf08c989cd88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.439894 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "102e986e-f101-4f49-af96-50368468f7b4" (UID: "102e986e-f101-4f49-af96-50368468f7b4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.486924 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data" (OuterVolumeSpecName: "config-data") pod "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" (UID: "09d25f2d-d205-4b7a-a17f-ca7e5b26ee43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.488548 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data" (OuterVolumeSpecName: "config-data") pod "8079a88f-5f47-4988-b4c8-6031fbfc9dd8" (UID: "8079a88f-5f47-4988-b4c8-6031fbfc9dd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493627 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493657 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493669 4915 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/900ea28b-9ef8-41fe-a522-044443efa94b-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493678 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e16a69-5c98-4e52-ad1c-bf08c989cd88-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493689 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493697 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.493705 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/102e986e-f101-4f49-af96-50368468f7b4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 
19:04:55.493714 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8079a88f-5f47-4988-b4c8-6031fbfc9dd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.495766 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5acd6cd6-a7d4-4839-b3c1-aec924797e53" (UID: "5acd6cd6-a7d4-4839-b3c1-aec924797e53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.552647 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.586253 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" event={"ID":"8079a88f-5f47-4988-b4c8-6031fbfc9dd8","Type":"ContainerDied","Data":"d39158ff46672ce4353c66afac0a87241133bfc0e6b9bd69d455fdaacb04f63c"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.586305 4915 scope.go:117] "RemoveContainer" containerID="6cce6eea7541be78647a81657904cffa63e1bcb52b367db413e89aa732f8add2" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.586464 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-854f9c8998-68jxd" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.589767 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09d25f2d-d205-4b7a-a17f-ca7e5b26ee43","Type":"ContainerDied","Data":"189fc85982919d425616b908ee089e3dbafb8c6fc0b6fa2d5bf412971e21f280"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.589839 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.594984 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd6cd6-a7d4-4839-b3c1-aec924797e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.595709 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7644f9784b-dbhxl" event={"ID":"1ec4f102-db6b-4f45-a5f4-1aad213e05fb","Type":"ContainerDied","Data":"1bee0958d742c43266ba16c4e675c2cb0143e957cb5b3d242b1120f1e83cbf06"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.595829 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7644f9784b-dbhxl" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.603085 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02e16a69-5c98-4e52-ad1c-bf08c989cd88","Type":"ContainerDied","Data":"e380097d1f9c4506ffc90c27f63341d15c33021f6e9ed9484e26a305efd420d2"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.603095 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.613149 4915 generic.go:334] "Generic (PLEG): container finished" podID="b7965677-0846-4016-ab96-efe29db9327c" containerID="51296994ea27c94c8b0de608b89745a8a40cf938c30cfb5bcbd6934c9687eaf3" exitCode=1 Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.613233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8mqkb" event={"ID":"b7965677-0846-4016-ab96-efe29db9327c","Type":"ContainerDied","Data":"51296994ea27c94c8b0de608b89745a8a40cf938c30cfb5bcbd6934c9687eaf3"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.616722 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.616763 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b01be235-2ab9-4e61-a5a4-1d006a9e6679","Type":"ContainerDied","Data":"45f7a51ba93b3d190c66f83b3dbf1c7803ec32c1aea00154175d792189c6c6ba"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.621916 4915 generic.go:334] "Generic (PLEG): container finished" podID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" exitCode=0 Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.622550 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.623737 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.625725 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.625810 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5acd6cd6-a7d4-4839-b3c1-aec924797e53","Type":"ContainerDied","Data":"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.625846 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5acd6cd6-a7d4-4839-b3c1-aec924797e53","Type":"ContainerDied","Data":"d8424f65a2ac917a5cb2614f100ad21d5e7e5af93d2e28b09e5457143522d542"} Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.625987 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.626055 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cff5fcc84-lxsfm" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.626162 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.626579 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.627557 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-btsbx" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="registry-server" containerID="cri-o://bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39" gracePeriod=2 Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.804039 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 19:04:55 crc kubenswrapper[4915]: E0127 19:04:55.804300 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data podName:5b5f81dc-48ff-40c8-a0af-84c7c60338fd nodeName:}" failed. No retries permitted until 2026-01-27 19:05:03.804286665 +0000 UTC m=+1395.162140329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data") pod "rabbitmq-server-0" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd") : configmap "rabbitmq-config-data" not found Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.955499 4915 scope.go:117] "RemoveContainer" containerID="b733ca9ab35e4ebe64a9190112a4fec324eed485f565cf65e04c47fd1024429e" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.957070 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.969418 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:55 crc kubenswrapper[4915]: I0127 19:04:55.976739 4915 scope.go:117] "RemoveContainer" containerID="dd40b8fd0daebe36f826489ae0a0b8562d952c77d4c20c94744566de6a48b7a9" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.029016 4915 scope.go:117] "RemoveContainer" containerID="06641066358db82da7a50ef5141d5040acea1b4c9a664c360a4dc77e290a9f7d" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.067496 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.070555 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.080783 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.083531 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.083586 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.086623 4915 scope.go:117] "RemoveContainer" containerID="e45fa431d5b8ceb54b435ca3de9cafe6ec697d885699991a82ee10f7135d868b" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.098432 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.101994 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.111671 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.112533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4nt\" (UniqueName: \"kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt\") pod \"b7965677-0846-4016-ab96-efe29db9327c\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.112604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts\") pod \"b7965677-0846-4016-ab96-efe29db9327c\" (UID: \"b7965677-0846-4016-ab96-efe29db9327c\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.114294 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7965677-0846-4016-ab96-efe29db9327c" (UID: "b7965677-0846-4016-ab96-efe29db9327c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.126647 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cff5fcc84-lxsfm"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.134972 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.140251 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt" (OuterVolumeSpecName: "kube-api-access-cp4nt") pod "b7965677-0846-4016-ab96-efe29db9327c" (UID: "b7965677-0846-4016-ab96-efe29db9327c"). InnerVolumeSpecName "kube-api-access-cp4nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.146131 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7644f9784b-dbhxl"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.152483 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.160899 4915 scope.go:117] "RemoveContainer" containerID="4e8f634b8ef683d623319b77feb8b9f9b68eb10a115f361187710eedb123c004" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.162473 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.175079 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.182595 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-854f9c8998-68jxd"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.204964 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.213651 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.213687 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content\") pod \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.213963 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities\") pod \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.213990 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln4n\" (UniqueName: \"kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n\") pod \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\" (UID: \"fab41436-1aa4-47d5-aa00-9a1199ce8d97\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.214599 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4nt\" (UniqueName: \"kubernetes.io/projected/b7965677-0846-4016-ab96-efe29db9327c-kube-api-access-cp4nt\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.214613 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7965677-0846-4016-ab96-efe29db9327c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.214649 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities" (OuterVolumeSpecName: "utilities") pod "fab41436-1aa4-47d5-aa00-9a1199ce8d97" (UID: "fab41436-1aa4-47d5-aa00-9a1199ce8d97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.220002 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n" (OuterVolumeSpecName: "kube-api-access-wln4n") pod "fab41436-1aa4-47d5-aa00-9a1199ce8d97" (UID: "fab41436-1aa4-47d5-aa00-9a1199ce8d97"). InnerVolumeSpecName "kube-api-access-wln4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.223164 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.229682 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.236334 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.246802 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.252930 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.258588 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.264608 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.283161 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 
19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.290912 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.295951 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.318117 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.318161 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln4n\" (UniqueName: \"kubernetes.io/projected/fab41436-1aa4-47d5-aa00-9a1199ce8d97-kube-api-access-wln4n\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.395989 4915 scope.go:117] "RemoveContainer" containerID="f76d58b03c67cca894c7c03c90ad9ed4cc48b08790c07b0c5345806e5f49dc1d" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.407579 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fab41436-1aa4-47d5-aa00-9a1199ce8d97" (UID: "fab41436-1aa4-47d5-aa00-9a1199ce8d97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.421951 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab41436-1aa4-47d5-aa00-9a1199ce8d97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.503944 4915 scope.go:117] "RemoveContainer" containerID="cd55b928203a100adbd69bb6a3576cba209345a89c3fe528d411607287b3964d" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.524400 4915 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.524479 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data podName:b3ead5d8-b1e5-4145-a6de-64c316f4027e nodeName:}" failed. No retries permitted until 2026-01-27 19:05:04.524455091 +0000 UTC m=+1395.882308755 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data") pod "rabbitmq-cell1-server-0" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e") : configmap "rabbitmq-cell1-config-data" not found Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.535083 4915 scope.go:117] "RemoveContainer" containerID="e8dc10179f8954b400ea8cb8db73ee6638c728824ddb7504365e79a8f13b926f" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.554355 4915 scope.go:117] "RemoveContainer" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.596945 4915 scope.go:117] "RemoveContainer" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.597338 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6\": container with ID starting with 1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6 not found: ID does not exist" containerID="1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.597362 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6"} err="failed to get container status \"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6\": rpc error: code = NotFound desc = could not find container \"1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6\": container with ID starting with 1ea6f431c86fb5649051ee3f19b1974868377911695706cdfe403159de6a57f6 not found: ID does not exist" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.629275 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.629371 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.629760 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.630029 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.630048 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not 
found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.630592 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.632893 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.632924 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.634695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8mqkb" event={"ID":"b7965677-0846-4016-ab96-efe29db9327c","Type":"ContainerDied","Data":"1645614c1fed111898ba08055b669b2d4f5fa6163bb096c5eb76c0fb246392cd"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.634750 4915 scope.go:117] "RemoveContainer" containerID="51296994ea27c94c8b0de608b89745a8a40cf938c30cfb5bcbd6934c9687eaf3" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.634869 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8mqkb" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.656753 4915 generic.go:334] "Generic (PLEG): container finished" podID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerID="2b4f416be9fb86b0cb75f45fd91a7a0c66676aedc9385d72b5cec37350e25d70" exitCode=0 Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.656856 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerDied","Data":"2b4f416be9fb86b0cb75f45fd91a7a0c66676aedc9385d72b5cec37350e25d70"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.660081 4915 generic.go:334] "Generic (PLEG): container finished" podID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerID="b05c34d07645b4a14d56b4d52e63ae462133c80866f332009ef2ce490415bcdc" exitCode=0 Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.660132 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerDied","Data":"b05c34d07645b4a14d56b4d52e63ae462133c80866f332009ef2ce490415bcdc"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.660149 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"69486e9e-4ef8-4749-842f-a38dfeba60d3","Type":"ContainerDied","Data":"c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.660159 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c469f41e66e538d2e937f34b0e269c15ac8c21d6493aa2369f95ba537e6a75b7" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.661715 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.663526 4915 generic.go:334] "Generic (PLEG): container finished" podID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerID="bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39" exitCode=0 Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.663584 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerDied","Data":"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.663606 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btsbx" event={"ID":"fab41436-1aa4-47d5-aa00-9a1199ce8d97","Type":"ContainerDied","Data":"ff4d4653225d64498e65d9f4bea6a59d29dd117687df20b61e746540917dfa91"} Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.663671 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btsbx" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.668515 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e30-account-create-update-jdsnb" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.677855 4915 scope.go:117] "RemoveContainer" containerID="bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7zkx\" (UniqueName: \"kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727238 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727259 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727288 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727350 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle\") pod 
\"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727409 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.727428 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"69486e9e-4ef8-4749-842f-a38dfeba60d3\" (UID: \"69486e9e-4ef8-4749-842f-a38dfeba60d3\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.728533 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.728937 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.733044 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.733517 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.738100 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx" (OuterVolumeSpecName: "kube-api-access-f7zkx") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "kube-api-access-f7zkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.748398 4915 scope.go:117] "RemoveContainer" containerID="bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.748703 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.756489 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.759162 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8mqkb"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.766701 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8mqkb"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.788046 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.790397 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "69486e9e-4ef8-4749-842f-a38dfeba60d3" (UID: "69486e9e-4ef8-4749-842f-a38dfeba60d3"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.796901 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0e30-account-create-update-jdsnb"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.801850 4915 scope.go:117] "RemoveContainer" containerID="c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.803083 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0e30-account-create-update-jdsnb"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.819365 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.824009 4915 scope.go:117] "RemoveContainer" containerID="bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.824514 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39\": container with ID starting with bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39 not found: ID does not exist" containerID="bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.824545 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39"} err="failed to get container status \"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39\": rpc error: code = NotFound desc = could not find container \"bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39\": container with ID starting with bd8024a44597258da389c76231d4b668491b3a18d4b37b10d5b6d21c9b7cef39 not found: ID does not exist" Jan 27 
19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.824565 4915 scope.go:117] "RemoveContainer" containerID="bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138" Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.824905 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138\": container with ID starting with bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138 not found: ID does not exist" containerID="bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.824945 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138"} err="failed to get container status \"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138\": rpc error: code = NotFound desc = could not find container \"bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138\": container with ID starting with bdc2b25c5f12fd04b75e7cbf223a2aa1009b7fd2ce5bf98e1ce4c3b2962cd138 not found: ID does not exist" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.824972 4915 scope.go:117] "RemoveContainer" containerID="c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.825347 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-btsbx"] Jan 27 19:04:56 crc kubenswrapper[4915]: E0127 19:04:56.825558 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526\": container with ID starting with c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526 not found: ID does not exist" 
containerID="c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.825586 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526"} err="failed to get container status \"c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526\": rpc error: code = NotFound desc = could not find container \"c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526\": container with ID starting with c327c5cb51ac79a1c8e9006c568bbab7eb5c46b5982eafb21e6cf57f2a366526 not found: ID does not exist" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828089 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828187 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828215 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828237 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbct\" (UniqueName: 
\"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828289 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828315 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828348 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828380 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828406 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828438 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828472 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info\") pod \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\" (UID: \"5b5f81dc-48ff-40c8-a0af-84c7c60338fd\") " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828777 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828796 4915 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828805 4915 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828823 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69486e9e-4ef8-4749-842f-a38dfeba60d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828832 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2tnx\" (UniqueName: \"kubernetes.io/projected/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-kube-api-access-j2tnx\") on node \"crc\" DevicePath 
\"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828841 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828849 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69486e9e-4ef8-4749-842f-a38dfeba60d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828866 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828875 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.828883 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7zkx\" (UniqueName: \"kubernetes.io/projected/69486e9e-4ef8-4749-842f-a38dfeba60d3-kube-api-access-f7zkx\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.830524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.832537 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.832875 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.833370 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.837040 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.837391 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.843321 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.845955 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct" (OuterVolumeSpecName: "kube-api-access-grbct") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "kube-api-access-grbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.853566 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data" (OuterVolumeSpecName: "config-data") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.856363 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.873979 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.907358 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5b5f81dc-48ff-40c8-a0af-84c7c60338fd" (UID: "5b5f81dc-48ff-40c8-a0af-84c7c60338fd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930216 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930248 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930258 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930268 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930296 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930306 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930315 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930324 4915 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930331 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930340 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930350 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.930358 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbct\" (UniqueName: \"kubernetes.io/projected/5b5f81dc-48ff-40c8-a0af-84c7c60338fd-kube-api-access-grbct\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:56 crc kubenswrapper[4915]: I0127 19:04:56.949911 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.037255 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.370030 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" path="/var/lib/kubelet/pods/02e16a69-5c98-4e52-ad1c-bf08c989cd88/volumes" Jan 27 
19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.371937 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" path="/var/lib/kubelet/pods/09d25f2d-d205-4b7a-a17f-ca7e5b26ee43/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.372687 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102e986e-f101-4f49-af96-50368468f7b4" path="/var/lib/kubelet/pods/102e986e-f101-4f49-af96-50368468f7b4/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.373605 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1586b620-c9ac-4c33-a1d4-5c83dec7c5f5" path="/var/lib/kubelet/pods/1586b620-c9ac-4c33-a1d4-5c83dec7c5f5/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.374008 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" path="/var/lib/kubelet/pods/1ec4f102-db6b-4f45-a5f4-1aad213e05fb/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.374542 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" path="/var/lib/kubelet/pods/5acd6cd6-a7d4-4839-b3c1-aec924797e53/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.375569 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" path="/var/lib/kubelet/pods/8079a88f-5f47-4988-b4c8-6031fbfc9dd8/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.376190 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900ea28b-9ef8-41fe-a522-044443efa94b" path="/var/lib/kubelet/pods/900ea28b-9ef8-41fe-a522-044443efa94b/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.376654 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" path="/var/lib/kubelet/pods/b01be235-2ab9-4e61-a5a4-1d006a9e6679/volumes" Jan 27 
19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.377914 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7965677-0846-4016-ab96-efe29db9327c" path="/var/lib/kubelet/pods/b7965677-0846-4016-ab96-efe29db9327c/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.378394 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" path="/var/lib/kubelet/pods/c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.379056 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" path="/var/lib/kubelet/pods/cc16c397-45e2-4878-8927-752a1832ec0a/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.379990 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" path="/var/lib/kubelet/pods/d5f29847-cfad-4f69-aff9-9c62b0088754/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.380448 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" path="/var/lib/kubelet/pods/fab41436-1aa4-47d5-aa00-9a1199ce8d97/volumes" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.425186 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bf7f58cfb-6c779" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547437 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrccc\" (UniqueName: \"kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547608 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547631 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547711 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547764 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.547819 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts\") pod \"d0031b79-12aa-4487-8501-6e122053cc13\" (UID: \"d0031b79-12aa-4487-8501-6e122053cc13\") " Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.556511 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts" (OuterVolumeSpecName: "scripts") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.556629 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.556757 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.556882 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc" (OuterVolumeSpecName: "kube-api-access-qrccc") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "kube-api-access-qrccc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.559933 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.583610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.592288 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data" (OuterVolumeSpecName: "config-data") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.599974 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.617849 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d0031b79-12aa-4487-8501-6e122053cc13" (UID: "d0031b79-12aa-4487-8501-6e122053cc13"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649871 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649904 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649918 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649928 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649936 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649944 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649954 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrccc\" (UniqueName: \"kubernetes.io/projected/d0031b79-12aa-4487-8501-6e122053cc13-kube-api-access-qrccc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.649962 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d0031b79-12aa-4487-8501-6e122053cc13-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.681501 4915 generic.go:334] "Generic (PLEG): container finished" podID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerID="23aa745b7dd112c1f66341caed263053b7f5ea24b9e8c71afd2243cebd1cab75" exitCode=0
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.681554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerDied","Data":"23aa745b7dd112c1f66341caed263053b7f5ea24b9e8c71afd2243cebd1cab75"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.683547 4915 generic.go:334] "Generic (PLEG): container finished" podID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerID="87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce" exitCode=0
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.683610 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.683617 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerDied","Data":"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.683637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b3ead5d8-b1e5-4145-a6de-64c316f4027e","Type":"ContainerDied","Data":"e24c285ab5913c5a999c9b9b5afcb30590ee7a769cd4ffc0ffa815e68c1209cb"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.683661 4915 scope.go:117] "RemoveContainer" containerID="87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.690081 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.690511 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b5f81dc-48ff-40c8-a0af-84c7c60338fd","Type":"ContainerDied","Data":"e41f4777fadbbb920de51d9857441037e2199f54ad2cdc08d5b5adce5f49bc85"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.693606 4915 generic.go:334] "Generic (PLEG): container finished" podID="d0031b79-12aa-4487-8501-6e122053cc13" containerID="5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675" exitCode=0
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.693660 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.693660 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bf7f58cfb-6c779"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.693698 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf7f58cfb-6c779" event={"ID":"d0031b79-12aa-4487-8501-6e122053cc13","Type":"ContainerDied","Data":"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.693721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf7f58cfb-6c779" event={"ID":"d0031b79-12aa-4487-8501-6e122053cc13","Type":"ContainerDied","Data":"c1d97c77399d2cd02cd8e2b010752cad135e8e77a434c2a2fa694a86efaecc7d"}
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.751170 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.751227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.751279 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753201 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753251 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753282 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753314 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753336 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753377 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753408 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.753481 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjk48\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48\") pod \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\" (UID: \"b3ead5d8-b1e5-4145-a6de-64c316f4027e\") "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.754373 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.759431 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.759945 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.760116 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.760306 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.761365 4915 scope.go:117] "RemoveContainer" containerID="30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.761629 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.763205 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info" (OuterVolumeSpecName: "pod-info") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.764973 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.765600 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48" (OuterVolumeSpecName: "kube-api-access-xjk48") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "kube-api-access-xjk48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.770856 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.784254 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.814989 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.819012 4915 scope.go:117] "RemoveContainer" containerID="87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.819627 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf" (OuterVolumeSpecName: "server-conf") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: E0127 19:04:57.819886 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce\": container with ID starting with 87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce not found: ID does not exist" containerID="87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.819914 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce"} err="failed to get container status \"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce\": rpc error: code = NotFound desc = could not find container \"87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce\": container with ID starting with 87b6ec8322f87a6503368ba614362e611cb45d804ae0510bab0ceb1477305fce not found: ID does not exist"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.819937 4915 scope.go:117] "RemoveContainer" containerID="30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"
Jan 27 19:04:57 crc kubenswrapper[4915]: E0127 19:04:57.820252 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31\": container with ID starting with 30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31 not found: ID does not exist" containerID="30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.820270 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31"} err="failed to get container status \"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31\": rpc error: code = NotFound desc = could not find container \"30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31\": container with ID starting with 30e1cae36df5f0d1cc0a2108f960ad759e0d97261fe96cef9f3de92ab69add31 not found: ID does not exist"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.820283 4915 scope.go:117] "RemoveContainer" containerID="2b4f416be9fb86b0cb75f45fd91a7a0c66676aedc9385d72b5cec37350e25d70"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.823143 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.829572 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data" (OuterVolumeSpecName: "config-data") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.836263 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5bf7f58cfb-6c779"]
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.847395 4915 scope.go:117] "RemoveContainer" containerID="1358fb4c705e4868c1b83ea13e0f2c10cac4558883cc330d554154fd44be9f97"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.856248 4915 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.856438 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.856543 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.856621 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.856750 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3ead5d8-b1e5-4145-a6de-64c316f4027e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.857114 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3ead5d8-b1e5-4145-a6de-64c316f4027e-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.857210 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjk48\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-kube-api-access-xjk48\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.857300 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3ead5d8-b1e5-4145-a6de-64c316f4027e-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.857398 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.857481 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.883937 4915 scope.go:117] "RemoveContainer" containerID="5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.887171 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.890435 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b3ead5d8-b1e5-4145-a6de-64c316f4027e" (UID: "b3ead5d8-b1e5-4145-a6de-64c316f4027e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.908225 4915 scope.go:117] "RemoveContainer" containerID="5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"
Jan 27 19:04:57 crc kubenswrapper[4915]: E0127 19:04:57.908611 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675\": container with ID starting with 5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675 not found: ID does not exist" containerID="5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.908647 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675"} err="failed to get container status \"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675\": rpc error: code = NotFound desc = could not find container \"5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675\": container with ID starting with 5462e3db7b4374ade37ec810af37e0e036487f16475e907a721f3b531cb23675 not found: ID does not exist"
Jan 27 19:04:57 crc kubenswrapper[4915]: I0127 19:04:57.932454 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f9776577f-2jndx"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:57.959274 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:57.959299 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3ead5d8-b1e5-4145-a6de-64c316f4027e-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.031785 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.042430 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.046669 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.059766 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle\") pod \"f2041e54-fb55-4f2a-8cf9-e439c7774485\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.059865 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom\") pod \"f2041e54-fb55-4f2a-8cf9-e439c7774485\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.059953 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs\") pod \"f2041e54-fb55-4f2a-8cf9-e439c7774485\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.059985 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data\") pod \"f2041e54-fb55-4f2a-8cf9-e439c7774485\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.060057 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4k5r\" (UniqueName: \"kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r\") pod \"f2041e54-fb55-4f2a-8cf9-e439c7774485\" (UID: \"f2041e54-fb55-4f2a-8cf9-e439c7774485\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.062891 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs" (OuterVolumeSpecName: "logs") pod "f2041e54-fb55-4f2a-8cf9-e439c7774485" (UID: "f2041e54-fb55-4f2a-8cf9-e439c7774485"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.063212 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r" (OuterVolumeSpecName: "kube-api-access-m4k5r") pod "f2041e54-fb55-4f2a-8cf9-e439c7774485" (UID: "f2041e54-fb55-4f2a-8cf9-e439c7774485"). InnerVolumeSpecName "kube-api-access-m4k5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.064604 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2041e54-fb55-4f2a-8cf9-e439c7774485" (UID: "f2041e54-fb55-4f2a-8cf9-e439c7774485"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.081147 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2041e54-fb55-4f2a-8cf9-e439c7774485" (UID: "f2041e54-fb55-4f2a-8cf9-e439c7774485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.098632 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data" (OuterVolumeSpecName: "config-data") pod "f2041e54-fb55-4f2a-8cf9-e439c7774485" (UID: "f2041e54-fb55-4f2a-8cf9-e439c7774485"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.164382 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrh2n\" (UniqueName: \"kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n\") pod \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.164700 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data\") pod \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.164741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle\") pod \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\" (UID: \"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7\") "
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.165094 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.165142 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4k5r\" (UniqueName: \"kubernetes.io/projected/f2041e54-fb55-4f2a-8cf9-e439c7774485-kube-api-access-m4k5r\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.165158 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.165170 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2041e54-fb55-4f2a-8cf9-e439c7774485-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.165181 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2041e54-fb55-4f2a-8cf9-e439c7774485-logs\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.169141 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n" (OuterVolumeSpecName: "kube-api-access-vrh2n") pod "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" (UID: "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7"). InnerVolumeSpecName "kube-api-access-vrh2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.191024 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data" (OuterVolumeSpecName: "config-data") pod "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" (UID: "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.191743 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" (UID: "7d059c6f-eb7b-47ad-bdeb-2af976dd43d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.266340 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.266705 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.266719 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrh2n\" (UniqueName: \"kubernetes.io/projected/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7-kube-api-access-vrh2n\") on node \"crc\" DevicePath \"\""
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.713155 4915 generic.go:334] "Generic (PLEG): container finished" podID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2" exitCode=0
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.713252 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.713275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7","Type":"ContainerDied","Data":"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"}
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.713307 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7d059c6f-eb7b-47ad-bdeb-2af976dd43d7","Type":"ContainerDied","Data":"eb386f79be2f2c517c1a63cc09c48bc18f5204fafe2161a73fd4032c6a7e536d"}
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.713326 4915 scope.go:117] "RemoveContainer" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.715190 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f9776577f-2jndx" event={"ID":"f2041e54-fb55-4f2a-8cf9-e439c7774485","Type":"ContainerDied","Data":"2bd3496683d10b4d9c7be3493afac9d6f97f82c7c11b9e02477f35b574ff4b57"}
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.715278 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f9776577f-2jndx"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.737716 4915 scope.go:117] "RemoveContainer" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"
Jan 27 19:04:58 crc kubenswrapper[4915]: E0127 19:04:58.739051 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2\": container with ID starting with c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2 not found: ID does not exist" containerID="c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.739090 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2"} err="failed to get container status \"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2\": rpc error: code = NotFound desc = could not find container \"c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2\": container with ID starting with c7fb90c400c672a38753580b510a0c3a5677129c7aa4308ee8cb3a9337fd46e2 not found: ID does not exist"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.739114 4915 scope.go:117] "RemoveContainer" containerID="23aa745b7dd112c1f66341caed263053b7f5ea24b9e8c71afd2243cebd1cab75"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.762628 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"]
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.775992 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-f9776577f-2jndx"]
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.781679 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.784358 4915 scope.go:117] "RemoveContainer" containerID="c4b9293d64815d72131fea7b74e18820d6f50efa91a0dc1e08436c11314f0de5"
Jan 27 19:04:58 crc kubenswrapper[4915]: I0127 19:04:58.786277 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.370409 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" path="/var/lib/kubelet/pods/5b5f81dc-48ff-40c8-a0af-84c7c60338fd/volumes"
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.371117 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" path="/var/lib/kubelet/pods/69486e9e-4ef8-4749-842f-a38dfeba60d3/volumes"
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.372246 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" path="/var/lib/kubelet/pods/7d059c6f-eb7b-47ad-bdeb-2af976dd43d7/volumes"
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.373318 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" path="/var/lib/kubelet/pods/b3ead5d8-b1e5-4145-a6de-64c316f4027e/volumes"
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.374168 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0031b79-12aa-4487-8501-6e122053cc13" path="/var/lib/kubelet/pods/d0031b79-12aa-4487-8501-6e122053cc13/volumes"
Jan 27 19:04:59 crc kubenswrapper[4915]: I0127 19:04:59.375620 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" path="/var/lib/kubelet/pods/f2041e54-fb55-4f2a-8cf9-e439c7774485/volumes"
Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.737409 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b70cef8-be7d-4d25-87e3-c9916452d855"
containerID="62678ab433696d7351f81f9c1770c3c809c3a53ec6feeae8a28042bf8f437350" exitCode=0 Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.737500 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerDied","Data":"62678ab433696d7351f81f9c1770c3c809c3a53ec6feeae8a28042bf8f437350"} Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.737883 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc59d8b57-zj69c" event={"ID":"0b70cef8-be7d-4d25-87e3-c9916452d855","Type":"ContainerDied","Data":"46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805"} Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.737900 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d2d0a45b46bd93cc97664b69410895904ed5527da3fc4ab8f2104c135b3805" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.740949 4915 generic.go:334] "Generic (PLEG): container finished" podID="f765c967-cd1f-44be-bf35-200a93f06c08" containerID="b6d441dae839212e31371ba1c55077036e6541f4ea0dbed85fba70fa1100e2eb" exitCode=0 Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.740972 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerDied","Data":"b6d441dae839212e31371ba1c55077036e6541f4ea0dbed85fba70fa1100e2eb"} Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.779993 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925455 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vg7v\" (UniqueName: \"kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925514 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925565 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925602 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925639 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.925716 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle\") pod \"0b70cef8-be7d-4d25-87e3-c9916452d855\" (UID: \"0b70cef8-be7d-4d25-87e3-c9916452d855\") " Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.930647 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v" (OuterVolumeSpecName: "kube-api-access-2vg7v") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "kube-api-access-2vg7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.962285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.970197 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.981879 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config" (OuterVolumeSpecName: "config") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.982508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:00 crc kubenswrapper[4915]: I0127 19:05:00.986005 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.003493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0b70cef8-be7d-4d25-87e3-c9916452d855" (UID: "0b70cef8-be7d-4d25-87e3-c9916452d855"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.021993 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.027824 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.027878 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vg7v\" (UniqueName: \"kubernetes.io/projected/0b70cef8-be7d-4d25-87e3-c9916452d855-kube-api-access-2vg7v\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.028680 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.028756 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.028779 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.028798 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-config\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.028833 4915 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b70cef8-be7d-4d25-87e3-c9916452d855-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.129887 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nw9x\" (UniqueName: \"kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.129950 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130040 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130068 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130101 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130129 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml\") pod 
\"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130157 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130194 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle\") pod \"f765c967-cd1f-44be-bf35-200a93f06c08\" (UID: \"f765c967-cd1f-44be-bf35-200a93f06c08\") " Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.130665 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.131164 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.133714 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x" (OuterVolumeSpecName: "kube-api-access-6nw9x") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "kube-api-access-6nw9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.134377 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts" (OuterVolumeSpecName: "scripts") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.149468 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.181122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.200941 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data" (OuterVolumeSpecName: "config-data") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.207944 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f765c967-cd1f-44be-bf35-200a93f06c08" (UID: "f765c967-cd1f-44be-bf35-200a93f06c08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231714 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231765 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231777 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f765c967-cd1f-44be-bf35-200a93f06c08-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231792 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc 
kubenswrapper[4915]: I0127 19:05:01.231822 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231834 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231847 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nw9x\" (UniqueName: \"kubernetes.io/projected/f765c967-cd1f-44be-bf35-200a93f06c08-kube-api-access-6nw9x\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.231858 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f765c967-cd1f-44be-bf35-200a93f06c08-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.627385 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.628261 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.628930 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.628979 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.629173 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.630638 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.631899 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 19:05:01 crc kubenswrapper[4915]: E0127 19:05:01.631939 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.760653 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f765c967-cd1f-44be-bf35-200a93f06c08","Type":"ContainerDied","Data":"03756e0180e06eef65db1069717371cc3e130eab91d786bd0c865735e0e8527b"} Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.760690 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.760733 4915 scope.go:117] "RemoveContainer" containerID="442aa21a700a655bb2c5e592d27927399226ce7e4818c5d5466669e4014e4238" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.761991 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc59d8b57-zj69c" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.807632 4915 scope.go:117] "RemoveContainer" containerID="fe19ba02a7b46df32800af54c264a8f50785062daa8c7d556b47dcdfaa7f8440" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.814152 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.823549 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cc59d8b57-zj69c"] Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.829742 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.834316 4915 scope.go:117] "RemoveContainer" containerID="b6d441dae839212e31371ba1c55077036e6541f4ea0dbed85fba70fa1100e2eb" Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.835509 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 19:05:01 crc kubenswrapper[4915]: I0127 19:05:01.863096 4915 scope.go:117] "RemoveContainer" containerID="91dde6520b09dd9a3bee7b1dccad272eed3957544bd65c78044cde31b5a5a33a" Jan 27 19:05:03 crc kubenswrapper[4915]: I0127 19:05:03.369942 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" path="/var/lib/kubelet/pods/0b70cef8-be7d-4d25-87e3-c9916452d855/volumes" Jan 27 19:05:03 crc kubenswrapper[4915]: I0127 19:05:03.371126 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" path="/var/lib/kubelet/pods/f765c967-cd1f-44be-bf35-200a93f06c08/volumes" Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.627098 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 
is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.627931 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.628166 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.628203 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.629563 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 
27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.631828 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.633518 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:06 crc kubenswrapper[4915]: E0127 19:05:06.633566 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd"
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.627092 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.628000 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.628356 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.628414 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server"
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.630080 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.631748 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.633333 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:11 crc kubenswrapper[4915]: E0127 19:05:11.633458 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd"
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.627562 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.628497 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.628549 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.629382 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.629476 4915 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server"
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.629828 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.631682 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 27 19:05:16 crc kubenswrapper[4915]: E0127 19:05:16.631773 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-g8pg6" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.659211 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g8pg6_f070ee25-edfb-4020-b526-3ec9d6c727bc/ovs-vswitchd/0.log"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.660605 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g8pg6"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.717992 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718091 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718169 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib" (OuterVolumeSpecName: "var-lib") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718225 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg78k\" (UniqueName: \"kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718259 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718279 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718309 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log\") pod \"f070ee25-edfb-4020-b526-3ec9d6c727bc\" (UID: \"f070ee25-edfb-4020-b526-3ec9d6c727bc\") "
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718384 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run" (OuterVolumeSpecName: "var-run") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718483 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log" (OuterVolumeSpecName: "var-log") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718690 4915 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-etc-ovs\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718706 4915 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-lib\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718716 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-run\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.718725 4915 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f070ee25-edfb-4020-b526-3ec9d6c727bc-var-log\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.719639 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts" (OuterVolumeSpecName: "scripts") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.724972 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k" (OuterVolumeSpecName: "kube-api-access-cg78k") pod "f070ee25-edfb-4020-b526-3ec9d6c727bc" (UID: "f070ee25-edfb-4020-b526-3ec9d6c727bc"). InnerVolumeSpecName "kube-api-access-cg78k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.820170 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg78k\" (UniqueName: \"kubernetes.io/projected/f070ee25-edfb-4020-b526-3ec9d6c727bc-kube-api-access-cg78k\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.820208 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f070ee25-edfb-4020-b526-3ec9d6c727bc-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.930666 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g8pg6_f070ee25-edfb-4020-b526-3ec9d6c727bc/ovs-vswitchd/0.log"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.932184 4915 generic.go:334] "Generic (PLEG): container finished" podID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f" exitCode=137
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.932224 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerDied","Data":"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"}
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.932251 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g8pg6" event={"ID":"f070ee25-edfb-4020-b526-3ec9d6c727bc","Type":"ContainerDied","Data":"12a34a821760a0f530c261e714f0db2f0f22dc4335757b7906bda953def91aa4"}
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.932268 4915 scope.go:117] "RemoveContainer" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.932431 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g8pg6"
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.992516 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"]
Jan 27 19:05:19 crc kubenswrapper[4915]: I0127 19:05:19.999206 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-g8pg6"]
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.003186 4915 scope.go:117] "RemoveContainer" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.037472 4915 scope.go:117] "RemoveContainer" containerID="acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.067618 4915 scope.go:117] "RemoveContainer" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"
Jan 27 19:05:20 crc kubenswrapper[4915]: E0127 19:05:20.068156 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f\": container with ID starting with d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f not found: ID does not exist" containerID="d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.068194 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f"} err="failed to get container status \"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f\": rpc error: code = NotFound desc = could not find container \"d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f\": container with ID starting with d43c55e36ed978e90f8191161811612daa0e9289fff647ae3c9131ebe3e2800f not found: ID does not exist"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.068219 4915 scope.go:117] "RemoveContainer" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"
Jan 27 19:05:20 crc kubenswrapper[4915]: E0127 19:05:20.068501 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91\": container with ID starting with 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 not found: ID does not exist" containerID="416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.068528 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91"} err="failed to get container status \"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91\": rpc error: code = NotFound desc = could not find container \"416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91\": container with ID starting with 416aa024dd84718d2278e120e5b9548f1485a7efe9eacb3f0b102fb77cb0cd91 not found: ID does not exist"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.068546 4915 scope.go:117] "RemoveContainer" containerID="acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e"
Jan 27 19:05:20 crc kubenswrapper[4915]: E0127 19:05:20.068992 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e\": container with ID starting with acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e not found: ID does not exist" containerID="acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.069024 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e"} err="failed to get container status \"acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e\": rpc error: code = NotFound desc = could not find container \"acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e\": container with ID starting with acec8eb90e92e5f007da92bc17cf2b949fa14a35fa9a64da0f7f65b3d4845a0e not found: ID does not exist"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.318618 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427505 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjcq\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427538 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427611 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427638 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.427690 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache\") pod \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\" (UID: \"a50240d6-5cb2-4e11-a9da-5a7c682b5d93\") "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.428008 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock" (OuterVolumeSpecName: "lock") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.428180 4915 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-lock\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.428369 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache" (OuterVolumeSpecName: "cache") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.432281 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq" (OuterVolumeSpecName: "kube-api-access-mkjcq") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "kube-api-access-mkjcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.434875 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.436951 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.529605 4915 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-cache\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.529642 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjcq\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-kube-api-access-mkjcq\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.529676 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.529689 4915 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.549038 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.625018 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.625077 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.625123 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.625741 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.625831 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0" gracePeriod=600
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.630632 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.671042 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a50240d6-5cb2-4e11-a9da-5a7c682b5d93" (UID: "a50240d6-5cb2-4e11-a9da-5a7c682b5d93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.732240 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50240d6-5cb2-4e11-a9da-5a7c682b5d93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.953045 4915 generic.go:334] "Generic (PLEG): container finished" podID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerID="46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d" exitCode=137
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.953148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"}
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.953554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a50240d6-5cb2-4e11-a9da-5a7c682b5d93","Type":"ContainerDied","Data":"3f3ea44280e6cf33cd3b40f5391706066ff6128d8fe5802e49d970c147950d80"}
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.953176 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.953618 4915 scope.go:117] "RemoveContainer" containerID="46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.958441 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0" exitCode=0
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.958484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0"}
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.958516 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"}
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.978065 4915 scope.go:117] "RemoveContainer" containerID="fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"
Jan 27 19:05:20 crc kubenswrapper[4915]: I0127 19:05:20.995644 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.001131 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.002026 4915 scope.go:117] "RemoveContainer" containerID="ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.019137 4915 scope.go:117] "RemoveContainer" containerID="b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.036396 4915 scope.go:117] "RemoveContainer" containerID="106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.058638 4915 scope.go:117] "RemoveContainer" containerID="1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.086908 4915 scope.go:117] "RemoveContainer" containerID="78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.109545 4915 scope.go:117] "RemoveContainer" containerID="f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.128022 4915 scope.go:117] "RemoveContainer" containerID="656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.153434 4915 scope.go:117] "RemoveContainer" containerID="c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.174664 4915 scope.go:117] "RemoveContainer" containerID="beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.197100 4915 scope.go:117] "RemoveContainer" containerID="b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.269197 4915 scope.go:117] "RemoveContainer" containerID="3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.287546 4915 scope.go:117] "RemoveContainer" containerID="72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.311353 4915 scope.go:117] "RemoveContainer" containerID="e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.337203 4915 scope.go:117] "RemoveContainer" containerID="46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"
Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.337708 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d\": container with ID starting with 46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d not found: ID does not exist" containerID="46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.337759 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d"} err="failed to get container status \"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d\": rpc error: code = NotFound desc = could not find container \"46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d\": container with ID starting with 46b8d42d59a4c9134b4742daa5c213bf555353d1a589cfa941606255e7eafb0d not found: ID does not exist"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.337799 4915 scope.go:117] "RemoveContainer" containerID="fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"
Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.338217 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3\": container with ID starting with fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3 not found: ID does not exist" containerID="fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.338254 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3"} err="failed to get container status \"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3\": rpc error: code = NotFound desc = could not find container \"fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3\": container with ID starting with fd4068096fd5f2903d7d5df0c34be282b42e91a2fefeb9a0704db78cfc32d4e3 not found: ID does not exist"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.338280 4915 scope.go:117] "RemoveContainer" containerID="ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"
Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.338520 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013\": container with ID starting with ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013 not found: ID does not exist" containerID="ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.338549 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013"} err="failed to get container status \"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013\": rpc error: code = NotFound desc = could not find container \"ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013\": container with ID starting with ff8cba282b52306ad467900708f6df7a1eaf511c00b3b5981ea26c5c9e70e013 not found: ID does not exist"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.338563 4915 scope.go:117] "RemoveContainer" containerID="b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"
Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.339050 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff\": container with ID starting with b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff not found: ID does not exist" containerID="b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.339099 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff"} err="failed to get container status \"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff\": rpc error: code = NotFound desc = could not find container \"b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff\": container with ID starting with b98e82fbb1e40903fba45d17c1b07ac3a0e0b6c3be8ba49bc847d57befbdd5ff not found: ID does not exist"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.339131 4915 scope.go:117] "RemoveContainer" containerID="106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"
Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.339545 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9\": container with ID starting with 106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9 not found: ID does not exist" containerID="106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"
Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.339566 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9"} err="failed to get container status 
\"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9\": rpc error: code = NotFound desc = could not find container \"106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9\": container with ID starting with 106e5d620c0c7dcf69873a17ca1ebc31c7e5a9422f1f81e014cea3df62b62fb9 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.339584 4915 scope.go:117] "RemoveContainer" containerID="1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.340039 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb\": container with ID starting with 1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb not found: ID does not exist" containerID="1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.340061 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb"} err="failed to get container status \"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb\": rpc error: code = NotFound desc = could not find container \"1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb\": container with ID starting with 1f21ec545ff0259b4b18263e691718a5351bd9422f37264a3483845c80f1cefb not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.340074 4915 scope.go:117] "RemoveContainer" containerID="78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.340496 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db\": container with ID starting with 78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db not found: ID does not exist" containerID="78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.340554 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db"} err="failed to get container status \"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db\": rpc error: code = NotFound desc = could not find container \"78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db\": container with ID starting with 78a076b42bd4e681328f57f13c01615e258359268ea1466d9b974011df9b71db not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.340577 4915 scope.go:117] "RemoveContainer" containerID="f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.340970 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd\": container with ID starting with f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd not found: ID does not exist" containerID="f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.340999 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd"} err="failed to get container status \"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd\": rpc error: code = NotFound desc = could not find container \"f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd\": container with ID 
starting with f190d7512b553feb7d1dac05ec55ec86498a8ed1cdd7497628af16abd2455ecd not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.341019 4915 scope.go:117] "RemoveContainer" containerID="656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.341403 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b\": container with ID starting with 656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b not found: ID does not exist" containerID="656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.341431 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b"} err="failed to get container status \"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b\": rpc error: code = NotFound desc = could not find container \"656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b\": container with ID starting with 656265f338caee64724f8f589fe0017f1ae9b7a5d7efe0c5ad8db7e9f450d63b not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.341448 4915 scope.go:117] "RemoveContainer" containerID="c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.341724 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588\": container with ID starting with c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588 not found: ID does not exist" containerID="c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588" Jan 27 
19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.341752 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588"} err="failed to get container status \"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588\": rpc error: code = NotFound desc = could not find container \"c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588\": container with ID starting with c8eff7dd1e13e6c536d79e1ff22aa0f1ba0d443e44584288ac87cb8ec64b4588 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.341768 4915 scope.go:117] "RemoveContainer" containerID="beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.342042 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691\": container with ID starting with beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691 not found: ID does not exist" containerID="beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342068 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691"} err="failed to get container status \"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691\": rpc error: code = NotFound desc = could not find container \"beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691\": container with ID starting with beb400939f63213e02a46a559003731d0fda6c025a7c4af4fba8c1ee07526691 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342084 4915 scope.go:117] "RemoveContainer" 
containerID="b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.342339 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675\": container with ID starting with b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675 not found: ID does not exist" containerID="b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342366 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675"} err="failed to get container status \"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675\": rpc error: code = NotFound desc = could not find container \"b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675\": container with ID starting with b249e78afb53948517adb9de33f9e258399a60ef239c7f6384d9c12096b96675 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342381 4915 scope.go:117] "RemoveContainer" containerID="3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.342637 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5\": container with ID starting with 3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5 not found: ID does not exist" containerID="3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342655 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5"} err="failed to get container status \"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5\": rpc error: code = NotFound desc = could not find container \"3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5\": container with ID starting with 3b66e42edfceb0c87e0accf31893b0d59c8f918a236d296da7127a8dce3823a5 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.342670 4915 scope.go:117] "RemoveContainer" containerID="72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.343015 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca\": container with ID starting with 72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca not found: ID does not exist" containerID="72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.343035 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca"} err="failed to get container status \"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca\": rpc error: code = NotFound desc = could not find container \"72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca\": container with ID starting with 72eb6ab24b0561bd223bc040fab18a8f09dd1c00079159354ab22b869271f0ca not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.343047 4915 scope.go:117] "RemoveContainer" containerID="e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49" Jan 27 19:05:21 crc kubenswrapper[4915]: E0127 19:05:21.343348 4915 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49\": container with ID starting with e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49 not found: ID does not exist" containerID="e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.343379 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49"} err="failed to get container status \"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49\": rpc error: code = NotFound desc = could not find container \"e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49\": container with ID starting with e4c6d8d3c41dd8a56303be9ab172d4c9244b4181c2505e4034f82cf5c1a04a49 not found: ID does not exist" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.343398 4915 scope.go:117] "RemoveContainer" containerID="f848ad2f1cae042bc567d9f4705384b39a6ceca79b0cc5f51ad49a89ebe4229a" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.371427 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" path="/var/lib/kubelet/pods/a50240d6-5cb2-4e11-a9da-5a7c682b5d93/volumes" Jan 27 19:05:21 crc kubenswrapper[4915]: I0127 19:05:21.373761 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" path="/var/lib/kubelet/pods/f070ee25-edfb-4020-b526-3ec9d6c727bc/volumes" Jan 27 19:05:24 crc kubenswrapper[4915]: I0127 19:05:24.691434 4915 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0dfca067-a625-475b-9443-cde8a54f12af"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0dfca067-a625-475b-9443-cde8a54f12af] : Timed out while waiting for systemd to remove 
kubepods-besteffort-pod0dfca067_a625_475b_9443_cde8a54f12af.slice" Jan 27 19:05:24 crc kubenswrapper[4915]: E0127 19:05:24.692343 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0dfca067-a625-475b-9443-cde8a54f12af] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0dfca067-a625-475b-9443-cde8a54f12af] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0dfca067_a625_475b_9443_cde8a54f12af.slice" pod="openstack/nova-api-8e80-account-create-update-47h54" podUID="0dfca067-a625-475b-9443-cde8a54f12af" Jan 27 19:05:24 crc kubenswrapper[4915]: I0127 19:05:24.721729 4915 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod67a9425f-4628-4cad-a9d0-b455a3b6f2b1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod67a9425f-4628-4cad-a9d0-b455a3b6f2b1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod67a9425f_4628_4cad_a9d0_b455a3b6f2b1.slice" Jan 27 19:05:24 crc kubenswrapper[4915]: I0127 19:05:24.998006 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8e80-account-create-update-47h54" Jan 27 19:05:25 crc kubenswrapper[4915]: I0127 19:05:25.043008 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:05:25 crc kubenswrapper[4915]: I0127 19:05:25.060925 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8e80-account-create-update-47h54"] Jan 27 19:05:25 crc kubenswrapper[4915]: I0127 19:05:25.369694 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfca067-a625-475b-9443-cde8a54f12af" path="/var/lib/kubelet/pods/0dfca067-a625-475b-9443-cde8a54f12af/volumes" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.772066 4915 scope.go:117] "RemoveContainer" containerID="a498e18cb645d2af96f71b6a4845679bc9d84cc14976fc27f3a86fb1021149d5" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.807998 4915 scope.go:117] "RemoveContainer" containerID="f58363042e991a9612d8ad38408e5dfa591ca7acbcfb33b084f57cad80503a36" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.837482 4915 scope.go:117] "RemoveContainer" containerID="6887c25ea3297356377874383d56d4801336bcec59eeece8e5861f7ce89c3764" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.864673 4915 scope.go:117] "RemoveContainer" containerID="aaf6233019e696a30d5a3e828efaf0591ebe9af4cbbb29f70d99a50b3c0b4520" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.895189 4915 scope.go:117] "RemoveContainer" containerID="82d02e4e93b364d97c657c32d58bfee07e0aedaa25c9ce966dd80158a75b2166" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.923861 4915 scope.go:117] "RemoveContainer" containerID="49805bddf1add5eeff06cfb04aca4a72f2aa708024fbe735b10e3d7051309f6f" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.954240 4915 scope.go:117] "RemoveContainer" containerID="94362fedde009c8c4e6d7acae6bce7dcc515be645c53ecd648341b39fd19e4cc" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 
19:06:51.971991 4915 scope.go:117] "RemoveContainer" containerID="dbafdb8a016d70bdd8b59a83241783799f12d6166b6a556bd79131357dd84181" Jan 27 19:06:51 crc kubenswrapper[4915]: I0127 19:06:51.990538 4915 scope.go:117] "RemoveContainer" containerID="ddeea46d0068d0692a1877e62fa4cfe602af838cb6034de47ada5fb45c968cda" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.012396 4915 scope.go:117] "RemoveContainer" containerID="9bf35278466b1eb6865efbb8b4d74a29b2c9e804afe1f6f529238009b91cf6d9" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.030274 4915 scope.go:117] "RemoveContainer" containerID="b05c34d07645b4a14d56b4d52e63ae462133c80866f332009ef2ce490415bcdc" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.062100 4915 scope.go:117] "RemoveContainer" containerID="97d42013e83b1c626eff029049c8ac6906ce51db020228b85bd96d062e4efb8f" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.077618 4915 scope.go:117] "RemoveContainer" containerID="1ee0dcf89f1039dcee1b4b017dfe4671e43e97013e3f8cc394b283a18e9ecf21" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.094063 4915 scope.go:117] "RemoveContainer" containerID="2307aac0f134ed53b3b103835823b9021aefeb9cd3a5ef2f8acd9077a9f61d25" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.125128 4915 scope.go:117] "RemoveContainer" containerID="9b6ca4421df3d3980e0456e18383ce4d6cf3d1302169aab1892b3f60264786ad" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.140554 4915 scope.go:117] "RemoveContainer" containerID="47de9cacd64cfebe73b1c97722b60389306b8f69b7d5151985869e539f89e176" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.159911 4915 scope.go:117] "RemoveContainer" containerID="1dd3f550ad87852c603ab43ce4c551f3e98213b4a6234d95db5496706e6065eb" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.185594 4915 scope.go:117] "RemoveContainer" containerID="6fca1c9af90ea84832fc9b8dba07e316c52eae6fd6d8b293f38c5cf84004bd05" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.206844 4915 
scope.go:117] "RemoveContainer" containerID="a33da52f918876be7e04c73bf550e4d751371de9f1f41a808dfdf89c2d34a2de" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.221085 4915 scope.go:117] "RemoveContainer" containerID="dc92f7d6abd39e3ef81eaf6dc25224f561ed1b346a562d573a0cb55379bcf521" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.236423 4915 scope.go:117] "RemoveContainer" containerID="fbe7fa792658ac0d2938034c71e6e12768548bb7e57cdd4e8498d3744da73a91" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.253686 4915 scope.go:117] "RemoveContainer" containerID="de86cd114c117f9cffe320ebf31f70906d6b6d5b162d70c11a8ccad94f9510e2" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.274661 4915 scope.go:117] "RemoveContainer" containerID="5d11d9c2ec59f64a391e2f8f2814d678f9353abb0906a39b1d2681cc0384e324" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.298361 4915 scope.go:117] "RemoveContainer" containerID="7fffb4fdd6a80fbec7e6cf82acb3dcd293ed9492fc9c6f489ec1057ec100aae3" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827164 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827656 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827700 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827731 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827750 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-log" Jan 27 
19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827780 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827832 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827848 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827861 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827880 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="galera" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827893 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="galera" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827917 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-reaper" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827930 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-reaper" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.827953 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827966 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" Jan 27 19:06:52 crc kubenswrapper[4915]: 
E0127 19:06:52.827980 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.827993 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828010 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828023 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828045 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828058 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828072 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828086 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828109 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="sg-core" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828122 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="sg-core" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828142 
4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828155 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828171 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7965677-0846-4016-ab96-efe29db9327c" containerName="mariadb-account-create-update" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828184 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7965677-0846-4016-ab96-efe29db9327c" containerName="mariadb-account-create-update" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828198 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828211 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828230 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828243 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828257 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828269 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828288 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="registry-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828304 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="registry-server" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828326 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828340 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828352 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828365 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828387 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ea28b-9ef8-41fe-a522-044443efa94b" containerName="kube-state-metrics" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828402 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ea28b-9ef8-41fe-a522-044443efa94b" containerName="kube-state-metrics" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828420 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-expirer" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828433 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-expirer" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828458 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828471 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-api" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828488 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server-init" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828500 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server-init" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828521 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828534 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-server" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828550 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828562 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828588 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828600 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-server" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828616 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerName="nova-scheduler-scheduler" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828629 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerName="nova-scheduler-scheduler" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828647 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828660 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828684 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828697 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828715 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828728 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-api" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828752 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="rsync" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828764 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="rsync" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 
19:06:52.828777 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0031b79-12aa-4487-8501-6e122053cc13" containerName="keystone-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828812 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0031b79-12aa-4487-8501-6e122053cc13" containerName="keystone-api" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828827 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerName="nova-cell1-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828854 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerName="nova-cell1-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828869 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="extract-content" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828881 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="extract-content" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828897 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828910 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828927 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="setup-container" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828940 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="setup-container" Jan 27 19:06:52 crc 
kubenswrapper[4915]: E0127 19:06:52.828954 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828966 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.828982 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.828995 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829010 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829028 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829048 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829061 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829083 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829096 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-server" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 
19:06:52.829119 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="proxy-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829131 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="proxy-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829152 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" containerName="memcached" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829165 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" containerName="memcached" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829185 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829197 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829232 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829244 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-log" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829264 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="extract-utilities" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829277 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="extract-utilities" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829301 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="setup-container" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829313 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="setup-container" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829329 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-notification-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829341 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-notification-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829361 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829373 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829391 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="swift-recon-cron" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829404 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="swift-recon-cron" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829421 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829433 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-api" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 
19:06:52.829448 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="mysql-bootstrap" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829460 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="mysql-bootstrap" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829482 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829499 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829521 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829537 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:06:52 crc kubenswrapper[4915]: E0127 19:06:52.829574 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-central-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.829594 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-central-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836314 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01be235-2ab9-4e61-a5a4-1d006a9e6679" containerName="memcached" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836358 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 
19:06:52.836380 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836399 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-notification-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836417 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836445 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836456 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec4f102-db6b-4f45-a5f4-1aad213e05fb" containerName="barbican-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836479 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="rsync" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836495 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovs-vswitchd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836512 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d059c6f-eb7b-47ad-bdeb-2af976dd43d7" containerName="nova-cell0-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836523 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="ceilometer-central-agent" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836550 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0031b79-12aa-4487-8501-6e122053cc13" 
containerName="keystone-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836567 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836598 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836615 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836633 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7965677-0846-4016-ab96-efe29db9327c" containerName="mariadb-account-create-update" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836651 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e16a69-5c98-4e52-ad1c-bf08c989cd88" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836667 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="102e986e-f101-4f49-af96-50368468f7b4" containerName="placement-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836682 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836698 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836710 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836727 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="proxy-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836735 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="69486e9e-4ef8-4749-842f-a38dfeba60d3" containerName="galera" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836750 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ead5d8-b1e5-4145-a6de-64c316f4027e" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836769 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836777 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="900ea28b-9ef8-41fe-a522-044443efa94b" containerName="kube-state-metrics" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836826 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f070ee25-edfb-4020-b526-3ec9d6c727bc" containerName="ovsdb-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836850 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-expirer" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836871 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836896 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2041e54-fb55-4f2a-8cf9-e439c7774485" containerName="barbican-worker-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836906 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-reaper" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836929 4915 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d5f29847-cfad-4f69-aff9-9c62b0088754" containerName="nova-scheduler-scheduler" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836947 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d25f2d-d205-4b7a-a17f-ca7e5b26ee43" containerName="glance-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836966 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5f81dc-48ff-40c8-a0af-84c7c60338fd" containerName="rabbitmq" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.836984 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837000 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837015 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b70cef8-be7d-4d25-87e3-c9916452d855" containerName="neutron-httpd" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837024 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc16c397-45e2-4878-8927-752a1832ec0a" containerName="nova-metadata-metadata" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837038 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-replicator" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837055 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="swift-recon-cron" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837072 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837089 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="account-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837105 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f765c967-cd1f-44be-bf35-200a93f06c08" containerName="sg-core" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837120 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1925fb8-7f5d-482e-8cd8-9f0d4bd8b35d" containerName="nova-api-api" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837141 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab41436-1aa4-47d5-aa00-9a1199ce8d97" containerName="registry-server" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837160 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837178 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="container-auditor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837191 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5acd6cd6-a7d4-4839-b3c1-aec924797e53" containerName="nova-cell1-conductor-conductor" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837209 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50240d6-5cb2-4e11-a9da-5a7c682b5d93" containerName="object-updater" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.837226 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8079a88f-5f47-4988-b4c8-6031fbfc9dd8" containerName="barbican-keystone-listener-log" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.840683 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.898999 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.933618 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.933657 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626vs\" (UniqueName: \"kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:52 crc kubenswrapper[4915]: I0127 19:06:52.933683 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.034945 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.035013 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-626vs\" (UniqueName: \"kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.035056 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.035518 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.035681 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.056178 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626vs\" (UniqueName: \"kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs\") pod \"redhat-marketplace-zgfhw\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.211173 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.652343 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.862840 4915 generic.go:334] "Generic (PLEG): container finished" podID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerID="8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d" exitCode=0 Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.863019 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerDied","Data":"8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d"} Jan 27 19:06:53 crc kubenswrapper[4915]: I0127 19:06:53.863191 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerStarted","Data":"c78bc5cefb6a9756b142dc6b1b4160cc14d6fd3105f3d232bca9a0a0860f0729"} Jan 27 19:06:55 crc kubenswrapper[4915]: I0127 19:06:55.885979 4915 generic.go:334] "Generic (PLEG): container finished" podID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerID="31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85" exitCode=0 Jan 27 19:06:55 crc kubenswrapper[4915]: I0127 19:06:55.886109 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerDied","Data":"31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85"} Jan 27 19:06:56 crc kubenswrapper[4915]: I0127 19:06:56.895062 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" 
event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerStarted","Data":"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707"} Jan 27 19:06:56 crc kubenswrapper[4915]: I0127 19:06:56.920190 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgfhw" podStartSLOduration=2.478586618 podStartE2EDuration="4.920170368s" podCreationTimestamp="2026-01-27 19:06:52 +0000 UTC" firstStartedPulling="2026-01-27 19:06:53.864250445 +0000 UTC m=+1505.222104109" lastFinishedPulling="2026-01-27 19:06:56.305834185 +0000 UTC m=+1507.663687859" observedRunningTime="2026-01-27 19:06:56.915234878 +0000 UTC m=+1508.273088552" watchObservedRunningTime="2026-01-27 19:06:56.920170368 +0000 UTC m=+1508.278024052" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.211327 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.211905 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.256605 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.501009 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k6btd"] Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.503459 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.510816 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6btd"] Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.595676 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.595964 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2gs\" (UniqueName: \"kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.596099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.697152 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.697221 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.697331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2gs\" (UniqueName: \"kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.697738 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.697778 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.718838 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2gs\" (UniqueName: \"kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs\") pod \"community-operators-k6btd\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:03 crc kubenswrapper[4915]: I0127 19:07:03.841262 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:04 crc kubenswrapper[4915]: I0127 19:07:04.012995 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:04 crc kubenswrapper[4915]: I0127 19:07:04.109748 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6btd"] Jan 27 19:07:04 crc kubenswrapper[4915]: I0127 19:07:04.968979 4915 generic.go:334] "Generic (PLEG): container finished" podID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerID="4702c3ae0519a244da2c46009b68be2fa364a4d52799e1f8e8caaf8c678d5b3d" exitCode=0 Jan 27 19:07:04 crc kubenswrapper[4915]: I0127 19:07:04.970175 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerDied","Data":"4702c3ae0519a244da2c46009b68be2fa364a4d52799e1f8e8caaf8c678d5b3d"} Jan 27 19:07:04 crc kubenswrapper[4915]: I0127 19:07:04.970213 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerStarted","Data":"e0ed102bf651ef83cb181284b6823b9883c04b43c6f61c665e03120c97f16ca6"} Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.287029 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.287519 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgfhw" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="registry-server" containerID="cri-o://f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707" gracePeriod=2 Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.718437 4915 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.745448 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities\") pod \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.745524 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626vs\" (UniqueName: \"kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs\") pod \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.745638 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content\") pod \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\" (UID: \"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20\") " Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.746204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities" (OuterVolumeSpecName: "utilities") pod "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" (UID: "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.750781 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs" (OuterVolumeSpecName: "kube-api-access-626vs") pod "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" (UID: "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20"). InnerVolumeSpecName "kube-api-access-626vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.773813 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" (UID: "ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.847351 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.847670 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626vs\" (UniqueName: \"kubernetes.io/projected/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-kube-api-access-626vs\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.847680 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.990184 4915 generic.go:334] "Generic (PLEG): container finished" podID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerID="f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707" exitCode=0 Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.990302 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerDied","Data":"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707"} Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.990362 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zgfhw" event={"ID":"ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20","Type":"ContainerDied","Data":"c78bc5cefb6a9756b142dc6b1b4160cc14d6fd3105f3d232bca9a0a0860f0729"} Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.990380 4915 scope.go:117] "RemoveContainer" containerID="f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707" Jan 27 19:07:06 crc kubenswrapper[4915]: I0127 19:07:06.990780 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgfhw" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:06.999884 4915 generic.go:334] "Generic (PLEG): container finished" podID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerID="5a1ae25332a16a03a34f842b34cf1de7cf85212284ea6b32a109facd5fc9446b" exitCode=0 Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.000481 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerDied","Data":"5a1ae25332a16a03a34f842b34cf1de7cf85212284ea6b32a109facd5fc9446b"} Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.018221 4915 scope.go:117] "RemoveContainer" containerID="31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.046054 4915 scope.go:117] "RemoveContainer" containerID="8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.099674 4915 scope.go:117] "RemoveContainer" containerID="f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707" Jan 27 19:07:07 crc kubenswrapper[4915]: E0127 19:07:07.100278 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707\": container with ID starting with 
f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707 not found: ID does not exist" containerID="f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100312 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707"} err="failed to get container status \"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707\": rpc error: code = NotFound desc = could not find container \"f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707\": container with ID starting with f7abf6f9b6b0ff2e02d720b153948b6104f3217611fe1f15fa2e04777d482707 not found: ID does not exist" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100337 4915 scope.go:117] "RemoveContainer" containerID="31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85" Jan 27 19:07:07 crc kubenswrapper[4915]: E0127 19:07:07.100584 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85\": container with ID starting with 31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85 not found: ID does not exist" containerID="31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100606 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85"} err="failed to get container status \"31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85\": rpc error: code = NotFound desc = could not find container \"31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85\": container with ID starting with 31de7982e80e1750b0da413b5a63cbdd83906385fb4b446c7dd0ac3e4e5c1b85 not found: ID does not 
exist" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100623 4915 scope.go:117] "RemoveContainer" containerID="8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d" Jan 27 19:07:07 crc kubenswrapper[4915]: E0127 19:07:07.100864 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d\": container with ID starting with 8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d not found: ID does not exist" containerID="8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100885 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d"} err="failed to get container status \"8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d\": rpc error: code = NotFound desc = could not find container \"8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d\": container with ID starting with 8077fc85ba64e1fbc81dd3485c57b648deb5c8b53d625c3bd7acbf32b8cb838d not found: ID does not exist" Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.100909 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.106848 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgfhw"] Jan 27 19:07:07 crc kubenswrapper[4915]: I0127 19:07:07.367454 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" path="/var/lib/kubelet/pods/ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20/volumes" Jan 27 19:07:08 crc kubenswrapper[4915]: I0127 19:07:08.021658 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerStarted","Data":"ad2f8a22df26ab66ac2041e4b69bfa31ec50f7349d57ebf88489837494f13fd9"} Jan 27 19:07:08 crc kubenswrapper[4915]: I0127 19:07:08.042773 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k6btd" podStartSLOduration=2.637509368 podStartE2EDuration="5.042751602s" podCreationTimestamp="2026-01-27 19:07:03 +0000 UTC" firstStartedPulling="2026-01-27 19:07:04.971000264 +0000 UTC m=+1516.328853928" lastFinishedPulling="2026-01-27 19:07:07.376242488 +0000 UTC m=+1518.734096162" observedRunningTime="2026-01-27 19:07:08.037026613 +0000 UTC m=+1519.394880267" watchObservedRunningTime="2026-01-27 19:07:08.042751602 +0000 UTC m=+1519.400605266" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.899908 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w24xn"] Jan 27 19:07:11 crc kubenswrapper[4915]: E0127 19:07:11.900608 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="extract-content" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.900623 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="extract-content" Jan 27 19:07:11 crc kubenswrapper[4915]: E0127 19:07:11.900637 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="extract-utilities" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.900646 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="extract-utilities" Jan 27 19:07:11 crc kubenswrapper[4915]: E0127 19:07:11.900664 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" 
containerName="registry-server" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.900675 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="registry-server" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.900857 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce32ed7f-f3e3-4e3c-b25b-7235dcfebe20" containerName="registry-server" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.902152 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.918701 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w24xn"] Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.929988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zspp\" (UniqueName: \"kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.930080 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:11 crc kubenswrapper[4915]: I0127 19:07:11.930107 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " 
pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.032158 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zspp\" (UniqueName: \"kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.032258 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.032286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.032893 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.033094 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " 
pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.051423 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zspp\" (UniqueName: \"kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp\") pod \"certified-operators-w24xn\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") " pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.228247 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w24xn" Jan 27 19:07:12 crc kubenswrapper[4915]: I0127 19:07:12.763299 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w24xn"] Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.061962 4915 generic.go:334] "Generic (PLEG): container finished" podID="6d44b695-348f-4c00-8fce-61c25f326838" containerID="15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676" exitCode=0 Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.062055 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerDied","Data":"15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676"} Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.062310 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerStarted","Data":"746a9da8ed23ae0c3adf7714b263e2de1a8b2023bf72982305da77dc39064125"} Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.842271 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.842972 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:13 crc kubenswrapper[4915]: I0127 19:07:13.885307 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:14 crc kubenswrapper[4915]: I0127 19:07:14.070002 4915 generic.go:334] "Generic (PLEG): container finished" podID="6d44b695-348f-4c00-8fce-61c25f326838" containerID="e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34" exitCode=0 Jan 27 19:07:14 crc kubenswrapper[4915]: I0127 19:07:14.070071 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerDied","Data":"e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34"} Jan 27 19:07:14 crc kubenswrapper[4915]: I0127 19:07:14.121122 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:15 crc kubenswrapper[4915]: I0127 19:07:15.080722 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerStarted","Data":"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"} Jan 27 19:07:15 crc kubenswrapper[4915]: I0127 19:07:15.101959 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w24xn" podStartSLOduration=2.663754164 podStartE2EDuration="4.101939394s" podCreationTimestamp="2026-01-27 19:07:11 +0000 UTC" firstStartedPulling="2026-01-27 19:07:13.063723819 +0000 UTC m=+1524.421577483" lastFinishedPulling="2026-01-27 19:07:14.501909009 +0000 UTC m=+1525.859762713" observedRunningTime="2026-01-27 19:07:15.099503074 +0000 UTC m=+1526.457356738" watchObservedRunningTime="2026-01-27 19:07:15.101939394 +0000 UTC 
m=+1526.459793058" Jan 27 19:07:16 crc kubenswrapper[4915]: I0127 19:07:16.290733 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6btd"] Jan 27 19:07:17 crc kubenswrapper[4915]: I0127 19:07:17.148711 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k6btd" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="registry-server" containerID="cri-o://ad2f8a22df26ab66ac2041e4b69bfa31ec50f7349d57ebf88489837494f13fd9" gracePeriod=2 Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.169152 4915 generic.go:334] "Generic (PLEG): container finished" podID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerID="ad2f8a22df26ab66ac2041e4b69bfa31ec50f7349d57ebf88489837494f13fd9" exitCode=0 Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.169205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerDied","Data":"ad2f8a22df26ab66ac2041e4b69bfa31ec50f7349d57ebf88489837494f13fd9"} Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.698097 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6btd" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.753199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2gs\" (UniqueName: \"kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs\") pod \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.753262 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities\") pod \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.753377 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content\") pod \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\" (UID: \"aeb0e96a-728f-48e2-9f1e-ea4fbd168764\") " Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.754618 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities" (OuterVolumeSpecName: "utilities") pod "aeb0e96a-728f-48e2-9f1e-ea4fbd168764" (UID: "aeb0e96a-728f-48e2-9f1e-ea4fbd168764"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.768123 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs" (OuterVolumeSpecName: "kube-api-access-4n2gs") pod "aeb0e96a-728f-48e2-9f1e-ea4fbd168764" (UID: "aeb0e96a-728f-48e2-9f1e-ea4fbd168764"). InnerVolumeSpecName "kube-api-access-4n2gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.810543 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeb0e96a-728f-48e2-9f1e-ea4fbd168764" (UID: "aeb0e96a-728f-48e2-9f1e-ea4fbd168764"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.854846 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2gs\" (UniqueName: \"kubernetes.io/projected/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-kube-api-access-4n2gs\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.854885 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:19 crc kubenswrapper[4915]: I0127 19:07:19.854896 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb0e96a-728f-48e2-9f1e-ea4fbd168764-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.179741 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6btd" event={"ID":"aeb0e96a-728f-48e2-9f1e-ea4fbd168764","Type":"ContainerDied","Data":"e0ed102bf651ef83cb181284b6823b9883c04b43c6f61c665e03120c97f16ca6"} Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.179892 4915 scope.go:117] "RemoveContainer" containerID="ad2f8a22df26ab66ac2041e4b69bfa31ec50f7349d57ebf88489837494f13fd9" Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.179955 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6btd"
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.212748 4915 scope.go:117] "RemoveContainer" containerID="5a1ae25332a16a03a34f842b34cf1de7cf85212284ea6b32a109facd5fc9446b"
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.221771 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6btd"]
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.228280 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k6btd"]
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.234713 4915 scope.go:117] "RemoveContainer" containerID="4702c3ae0519a244da2c46009b68be2fa364a4d52799e1f8e8caaf8c678d5b3d"
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.625515 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:07:20 crc kubenswrapper[4915]: I0127 19:07:20.625585 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:07:21 crc kubenswrapper[4915]: I0127 19:07:21.366494 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" path="/var/lib/kubelet/pods/aeb0e96a-728f-48e2-9f1e-ea4fbd168764/volumes"
Jan 27 19:07:22 crc kubenswrapper[4915]: I0127 19:07:22.229379 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:22 crc kubenswrapper[4915]: I0127 19:07:22.229456 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:22 crc kubenswrapper[4915]: I0127 19:07:22.272652 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:23 crc kubenswrapper[4915]: I0127 19:07:23.251723 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:24 crc kubenswrapper[4915]: I0127 19:07:24.689590 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w24xn"]
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.229566 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w24xn" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="registry-server" containerID="cri-o://9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2" gracePeriod=2
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.666399 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.856081 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content\") pod \"6d44b695-348f-4c00-8fce-61c25f326838\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") "
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.856165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zspp\" (UniqueName: \"kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp\") pod \"6d44b695-348f-4c00-8fce-61c25f326838\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") "
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.856259 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities\") pod \"6d44b695-348f-4c00-8fce-61c25f326838\" (UID: \"6d44b695-348f-4c00-8fce-61c25f326838\") "
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.857246 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities" (OuterVolumeSpecName: "utilities") pod "6d44b695-348f-4c00-8fce-61c25f326838" (UID: "6d44b695-348f-4c00-8fce-61c25f326838"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.861087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp" (OuterVolumeSpecName: "kube-api-access-7zspp") pod "6d44b695-348f-4c00-8fce-61c25f326838" (UID: "6d44b695-348f-4c00-8fce-61c25f326838"). InnerVolumeSpecName "kube-api-access-7zspp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.908464 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d44b695-348f-4c00-8fce-61c25f326838" (UID: "6d44b695-348f-4c00-8fce-61c25f326838"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.957655 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.957690 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d44b695-348f-4c00-8fce-61c25f326838-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:07:26 crc kubenswrapper[4915]: I0127 19:07:26.957701 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zspp\" (UniqueName: \"kubernetes.io/projected/6d44b695-348f-4c00-8fce-61c25f326838-kube-api-access-7zspp\") on node \"crc\" DevicePath \"\""
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.246073 4915 generic.go:334] "Generic (PLEG): container finished" podID="6d44b695-348f-4c00-8fce-61c25f326838" containerID="9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2" exitCode=0
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.246123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerDied","Data":"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"}
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.246151 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w24xn" event={"ID":"6d44b695-348f-4c00-8fce-61c25f326838","Type":"ContainerDied","Data":"746a9da8ed23ae0c3adf7714b263e2de1a8b2023bf72982305da77dc39064125"}
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.246167 4915 scope.go:117] "RemoveContainer" containerID="9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.246197 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w24xn"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.268668 4915 scope.go:117] "RemoveContainer" containerID="e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.287019 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w24xn"]
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.293586 4915 scope.go:117] "RemoveContainer" containerID="15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.294802 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w24xn"]
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.327200 4915 scope.go:117] "RemoveContainer" containerID="9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"
Jan 27 19:07:27 crc kubenswrapper[4915]: E0127 19:07:27.327689 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2\": container with ID starting with 9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2 not found: ID does not exist" containerID="9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.327719 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2"} err="failed to get container status \"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2\": rpc error: code = NotFound desc = could not find container \"9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2\": container with ID starting with 9211f867a51d00e2d9da2a3d545f73da0489b150a45517136544abf8267033e2 not found: ID does not exist"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.327739 4915 scope.go:117] "RemoveContainer" containerID="e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34"
Jan 27 19:07:27 crc kubenswrapper[4915]: E0127 19:07:27.328114 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34\": container with ID starting with e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34 not found: ID does not exist" containerID="e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.328151 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34"} err="failed to get container status \"e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34\": rpc error: code = NotFound desc = could not find container \"e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34\": container with ID starting with e75ec05724d54970598b658d8582e9446a8b984682e08ad66be786e2c9a42f34 not found: ID does not exist"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.328179 4915 scope.go:117] "RemoveContainer" containerID="15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676"
Jan 27 19:07:27 crc kubenswrapper[4915]: E0127 19:07:27.328546 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676\": container with ID starting with 15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676 not found: ID does not exist" containerID="15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.328611 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676"} err="failed to get container status \"15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676\": rpc error: code = NotFound desc = could not find container \"15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676\": container with ID starting with 15e494de55e4023a3d4bff342ae7167f1fd7ca99b6f7761135ed6275ac0ac676 not found: ID does not exist"
Jan 27 19:07:27 crc kubenswrapper[4915]: I0127 19:07:27.367680 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d44b695-348f-4c00-8fce-61c25f326838" path="/var/lib/kubelet/pods/6d44b695-348f-4c00-8fce-61c25f326838/volumes"
Jan 27 19:07:50 crc kubenswrapper[4915]: I0127 19:07:50.625175 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:07:50 crc kubenswrapper[4915]: I0127 19:07:50.627506 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.574983 4915 scope.go:117] "RemoveContainer" containerID="753760b10b6c7ebc4f0b10f1a269dd6898612cf07c1c2cfc1191d8b5e78b2a3f"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.624750 4915 scope.go:117] "RemoveContainer" containerID="9dcb7ba1f3ec9fd6b49c31a0cd733639a18e603ad9340d0bb81f8a3609ce0718"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.653742 4915 scope.go:117] "RemoveContainer" containerID="e2a265c1ce7bdb5d70e035fb5c422c6c12f3d29c42f1e121829c270efeb41f4f"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.675783 4915 scope.go:117] "RemoveContainer" containerID="986dc9ed2573c43d80b5094114f512209e4b19c99546c56aae48a2da45b7c15d"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.717302 4915 scope.go:117] "RemoveContainer" containerID="ad1a62b5c13ac79f78f79de53e79d8b6e2d71d37af6a621d62c0f82ef67828b5"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.735497 4915 scope.go:117] "RemoveContainer" containerID="4b0112d5705b109ff815f61cf7bdeeaff9197c76f99b6d90650c55afe47637e3"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.771270 4915 scope.go:117] "RemoveContainer" containerID="5b5fd7950d039f8f25f1800441b524fb35816018c64293f78005b6a7b70b7696"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.786540 4915 scope.go:117] "RemoveContainer" containerID="2f5ebdc9256b15339951da1e7a6b802d34dbb6a7dde142f5f99b12f9156c78e0"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.851215 4915 scope.go:117] "RemoveContainer" containerID="2be69f7a262552a158049c3b0bbfbb5ea2b1a992e0c212434065b95a436e316e"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.872603 4915 scope.go:117] "RemoveContainer" containerID="01e3ce09a2526a0318f8a67730a60d1b0cc8744b130e54f91d75d1a77fb8f6a7"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.918423 4915 scope.go:117] "RemoveContainer" containerID="77c76021d33ae4822a885e5c353f68fba8512e26f1aaf7f82c6dc167a05c68d6"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.956650 4915 scope.go:117] "RemoveContainer" containerID="62678ab433696d7351f81f9c1770c3c809c3a53ec6feeae8a28042bf8f437350"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.975582 4915 scope.go:117] "RemoveContainer" containerID="8e59ad30f91880c70b1191ba7fdf5268caf84fda39488bb872a4b8b61d203b13"
Jan 27 19:07:52 crc kubenswrapper[4915]: I0127 19:07:52.998207 4915 scope.go:117] "RemoveContainer" containerID="1b588bc3241f6527c713e1b6ce11c85307d42360eacca87a03d21e1910e5ed34"
Jan 27 19:08:20 crc kubenswrapper[4915]: I0127 19:08:20.624667 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:08:20 crc kubenswrapper[4915]: I0127 19:08:20.625283 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:08:20 crc kubenswrapper[4915]: I0127 19:08:20.625329 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:08:20 crc kubenswrapper[4915]: I0127 19:08:20.626032 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:08:20 crc kubenswrapper[4915]: I0127 19:08:20.626088 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080" gracePeriod=600
Jan 27 19:08:20 crc kubenswrapper[4915]: E0127 19:08:20.750396 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:08:21 crc kubenswrapper[4915]: I0127 19:08:21.725469 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080" exitCode=0
Jan 27 19:08:21 crc kubenswrapper[4915]: I0127 19:08:21.725523 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"}
Jan 27 19:08:21 crc kubenswrapper[4915]: I0127 19:08:21.727003 4915 scope.go:117] "RemoveContainer" containerID="0ffdf2d8bcfcee0fd3a2af2a920f2189c02f8aef159f2288deb6331f5a73c9e0"
Jan 27 19:08:21 crc kubenswrapper[4915]: I0127 19:08:21.727584 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:08:21 crc kubenswrapper[4915]: E0127 19:08:21.728014 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:08:36 crc kubenswrapper[4915]: I0127 19:08:36.357512 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:08:36 crc kubenswrapper[4915]: E0127 19:08:36.359523 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:08:47 crc kubenswrapper[4915]: I0127 19:08:47.358992 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:08:47 crc kubenswrapper[4915]: E0127 19:08:47.360374 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.270776 4915 scope.go:117] "RemoveContainer" containerID="94b37cb1181144ca154da470500875bce1c1eb852ba38ce3063c820112e7cac3"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.329288 4915 scope.go:117] "RemoveContainer" containerID="4a61282c22786665c1c6c450e9b65eb6a23a9e6b7e17f42e87e361901c21f628"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.392257 4915 scope.go:117] "RemoveContainer" containerID="e41e2a390a13486abb25e163c192e86770faad8b33152303528c9702730fc6bb"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.426508 4915 scope.go:117] "RemoveContainer" containerID="fb3794bee7f36e17deb1063b4afdf6f679603461eaf2081a52d75e27cbc706d1"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.468333 4915 scope.go:117] "RemoveContainer" containerID="f597c72d6e48db1ed7f8a9a65cb96d325a3bf4ccf903fe72a0904e1613c1cfca"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.486379 4915 scope.go:117] "RemoveContainer" containerID="db398784d22a8ab35ea75bbe90593b343f7ea20905f57079cee11f49dd81b517"
Jan 27 19:08:53 crc kubenswrapper[4915]: I0127 19:08:53.512116 4915 scope.go:117] "RemoveContainer" containerID="e48ae78e0ee85508aad39735796b0214f22e733b94c1e67c95c5d48eb01c99e4"
Jan 27 19:09:00 crc kubenswrapper[4915]: I0127 19:09:00.357233 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:09:00 crc kubenswrapper[4915]: E0127 19:09:00.358047 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:09:14 crc kubenswrapper[4915]: I0127 19:09:14.357467 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:09:14 crc kubenswrapper[4915]: E0127 19:09:14.358591 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:09:27 crc kubenswrapper[4915]: I0127 19:09:27.357845 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:09:27 crc kubenswrapper[4915]: E0127 19:09:27.358663 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:09:41 crc kubenswrapper[4915]: I0127 19:09:41.357682 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:09:41 crc kubenswrapper[4915]: E0127 19:09:41.358536 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:09:53 crc kubenswrapper[4915]: I0127 19:09:53.676111 4915 scope.go:117] "RemoveContainer" containerID="aa2060fdabb091563f415077006bb9c0a8815f2020f87e80cd0822b5238c5ef6"
Jan 27 19:09:53 crc kubenswrapper[4915]: I0127 19:09:53.732555 4915 scope.go:117] "RemoveContainer" containerID="16a9a2d97d7bcb250afc9ef0659b63ec28ea84248a94d15544709c1947ac3836"
Jan 27 19:09:53 crc kubenswrapper[4915]: I0127 19:09:53.784057 4915 scope.go:117] "RemoveContainer" containerID="d35fdae5565950f6f21c26cee4bc83f55e54749039323306d48c13acce01c4b3"
Jan 27 19:09:56 crc kubenswrapper[4915]: I0127 19:09:56.358060 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:09:56 crc kubenswrapper[4915]: E0127 19:09:56.358684 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:10:10 crc kubenswrapper[4915]: I0127 19:10:10.357694 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:10:10 crc kubenswrapper[4915]: E0127 19:10:10.358893 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:10:22 crc kubenswrapper[4915]: I0127 19:10:22.359142 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:10:22 crc kubenswrapper[4915]: E0127 19:10:22.359953 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:10:34 crc kubenswrapper[4915]: I0127 19:10:34.358473 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:10:34 crc kubenswrapper[4915]: E0127 19:10:34.359617 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:10:48 crc kubenswrapper[4915]: I0127 19:10:48.357769 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:10:48 crc kubenswrapper[4915]: E0127 19:10:48.358885 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.863155 4915 scope.go:117] "RemoveContainer" containerID="34e977874b962f76e26b957f6cccd2391cc8d2a127538d3d1be4030d291b2063"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.895030 4915 scope.go:117] "RemoveContainer" containerID="361b3a3c5ffb20740339364094b1dc7890ed96dc005e72cd38dfd352f50114ed"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.916004 4915 scope.go:117] "RemoveContainer" containerID="819859927b2d78abacf57b6ca8d9b1fe29088977d7d8e0a43137061459a30ee7"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.957145 4915 scope.go:117] "RemoveContainer" containerID="d835385c0bed6004e4974dcc64c9e29fb80d508d9fd2f147d3785f1645ad55ed"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.976129 4915 scope.go:117] "RemoveContainer" containerID="3071a4d932d0954ba665a0964a52de9a30feb98cb297eae5bdd4c4d385c63546"
Jan 27 19:10:53 crc kubenswrapper[4915]: I0127 19:10:53.997045 4915 scope.go:117] "RemoveContainer" containerID="a933786d7cc9361f17ff67cb45c92e9e82dd73239e589fc136036527cb4a3031"
Jan 27 19:11:00 crc kubenswrapper[4915]: I0127 19:11:00.357969 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:11:00 crc kubenswrapper[4915]: E0127 19:11:00.358827 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:11:11 crc kubenswrapper[4915]: I0127 19:11:11.358030 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:11:11 crc kubenswrapper[4915]: E0127 19:11:11.358758 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:11:22 crc kubenswrapper[4915]: I0127 19:11:22.358299 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:11:22 crc kubenswrapper[4915]: E0127 19:11:22.360376 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:11:37 crc kubenswrapper[4915]: I0127 19:11:37.358393 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:11:37 crc kubenswrapper[4915]: E0127 19:11:37.359137 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:11:50 crc kubenswrapper[4915]: I0127 19:11:50.358590 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:11:50 crc kubenswrapper[4915]: E0127 19:11:50.359744 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:12:05 crc kubenswrapper[4915]: I0127 19:12:05.358381 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:12:05 crc kubenswrapper[4915]: E0127 19:12:05.362012 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:12:17 crc kubenswrapper[4915]: I0127 19:12:17.358571 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:12:17 crc kubenswrapper[4915]: E0127 19:12:17.359846 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:12:30 crc kubenswrapper[4915]: I0127 19:12:30.357536 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:12:30 crc kubenswrapper[4915]: E0127 19:12:30.358282 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:12:42 crc kubenswrapper[4915]: I0127 19:12:42.358968 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:12:42 crc kubenswrapper[4915]: E0127 19:12:42.360309 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:12:55 crc kubenswrapper[4915]: I0127 19:12:55.357688 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:12:55 crc kubenswrapper[4915]: E0127 19:12:55.359900 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:13:07 crc kubenswrapper[4915]: I0127 19:13:07.358122 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:13:07 crc kubenswrapper[4915]: E0127 19:13:07.358892 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:13:21 crc kubenswrapper[4915]: I0127 19:13:21.358208 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080"
Jan 27 19:13:21 crc kubenswrapper[4915]: I0127 19:13:21.708239 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052"}
Jan 27 19:13:24 crc kubenswrapper[4915]: I0127 19:13:24.856066 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podUID="04ed2ea6-d6e8-4e28-a038-7e9b23259535" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:13:24 crc kubenswrapper[4915]: I0127 19:13:24.856062 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-4lskf" podUID="04ed2ea6-d6e8-4e28-a038-7e9b23259535" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:13:25 crc kubenswrapper[4915]: I0127 19:13:25.047633 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podUID="0a712a3e-d212-4452-950d-51d8c803ffdf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:13:25 crc kubenswrapper[4915]: I0127 19:13:25.047720 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jdv22" podUID="0a712a3e-d212-4452-950d-51d8c803ffdf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.554103 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"]
Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555190 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="extract-utilities"
Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555212 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="extract-utilities"
Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555237 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="extract-content"
Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555249 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="extract-content"
Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555277 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="registry-server"
Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555285 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="registry-server"
Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555303 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="registry-server"
Jan 27
19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555311 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="registry-server" Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555329 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="extract-utilities" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555338 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="extract-utilities" Jan 27 19:14:34 crc kubenswrapper[4915]: E0127 19:14:34.555352 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="extract-content" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555360 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="extract-content" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555549 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d44b695-348f-4c00-8fce-61c25f326838" containerName="registry-server" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.555568 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb0e96a-728f-48e2-9f1e-ea4fbd168764" containerName="registry-server" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.557988 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.574666 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"] Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.648320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.648375 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.648398 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmckc\" (UniqueName: \"kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.749464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.749519 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.749541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmckc\" (UniqueName: \"kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.750089 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.750305 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.777913 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmckc\" (UniqueName: \"kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc\") pod \"redhat-operators-kwx4s\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:34 crc kubenswrapper[4915]: I0127 19:14:34.880945 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:35 crc kubenswrapper[4915]: I0127 19:14:35.336219 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"] Jan 27 19:14:35 crc kubenswrapper[4915]: W0127 19:14:35.352402 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b32eea1_752e_43a7_a42d_4ac88e340185.slice/crio-1ff9b43a5527281c010f8cccae557dfa752927e3f34da404fa9caf66e8bef4fb WatchSource:0}: Error finding container 1ff9b43a5527281c010f8cccae557dfa752927e3f34da404fa9caf66e8bef4fb: Status 404 returned error can't find the container with id 1ff9b43a5527281c010f8cccae557dfa752927e3f34da404fa9caf66e8bef4fb Jan 27 19:14:35 crc kubenswrapper[4915]: I0127 19:14:35.368570 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerStarted","Data":"1ff9b43a5527281c010f8cccae557dfa752927e3f34da404fa9caf66e8bef4fb"} Jan 27 19:14:36 crc kubenswrapper[4915]: I0127 19:14:36.379695 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerID="9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3" exitCode=0 Jan 27 19:14:36 crc kubenswrapper[4915]: I0127 19:14:36.379749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerDied","Data":"9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3"} Jan 27 19:14:36 crc kubenswrapper[4915]: I0127 19:14:36.382631 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:14:37 crc kubenswrapper[4915]: I0127 19:14:37.388892 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerStarted","Data":"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce"} Jan 27 19:14:38 crc kubenswrapper[4915]: I0127 19:14:38.401691 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerID="a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce" exitCode=0 Jan 27 19:14:38 crc kubenswrapper[4915]: I0127 19:14:38.401756 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerDied","Data":"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce"} Jan 27 19:14:39 crc kubenswrapper[4915]: I0127 19:14:39.415077 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerStarted","Data":"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e"} Jan 27 19:14:39 crc kubenswrapper[4915]: I0127 19:14:39.443636 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kwx4s" podStartSLOduration=2.9358800240000003 podStartE2EDuration="5.443618566s" podCreationTimestamp="2026-01-27 19:14:34 +0000 UTC" firstStartedPulling="2026-01-27 19:14:36.382344244 +0000 UTC m=+1967.740197908" lastFinishedPulling="2026-01-27 19:14:38.890082776 +0000 UTC m=+1970.247936450" observedRunningTime="2026-01-27 19:14:39.437450774 +0000 UTC m=+1970.795304438" watchObservedRunningTime="2026-01-27 19:14:39.443618566 +0000 UTC m=+1970.801472230" Jan 27 19:14:44 crc kubenswrapper[4915]: I0127 19:14:44.881952 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:44 crc kubenswrapper[4915]: I0127 19:14:44.882370 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:45 crc kubenswrapper[4915]: I0127 19:14:45.934825 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kwx4s" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="registry-server" probeResult="failure" output=< Jan 27 19:14:45 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 19:14:45 crc kubenswrapper[4915]: > Jan 27 19:14:54 crc kubenswrapper[4915]: I0127 19:14:54.948358 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:55 crc kubenswrapper[4915]: I0127 19:14:55.008568 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:55 crc kubenswrapper[4915]: I0127 19:14:55.193991 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"] Jan 27 19:14:56 crc kubenswrapper[4915]: I0127 19:14:56.541137 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kwx4s" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="registry-server" containerID="cri-o://3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e" gracePeriod=2 Jan 27 19:14:56 crc kubenswrapper[4915]: I0127 19:14:56.968124 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.132519 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content\") pod \"0b32eea1-752e-43a7-a42d-4ac88e340185\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.132606 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmckc\" (UniqueName: \"kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc\") pod \"0b32eea1-752e-43a7-a42d-4ac88e340185\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.132672 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities\") pod \"0b32eea1-752e-43a7-a42d-4ac88e340185\" (UID: \"0b32eea1-752e-43a7-a42d-4ac88e340185\") " Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.133845 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities" (OuterVolumeSpecName: "utilities") pod "0b32eea1-752e-43a7-a42d-4ac88e340185" (UID: "0b32eea1-752e-43a7-a42d-4ac88e340185"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.137524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc" (OuterVolumeSpecName: "kube-api-access-dmckc") pod "0b32eea1-752e-43a7-a42d-4ac88e340185" (UID: "0b32eea1-752e-43a7-a42d-4ac88e340185"). InnerVolumeSpecName "kube-api-access-dmckc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.234377 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmckc\" (UniqueName: \"kubernetes.io/projected/0b32eea1-752e-43a7-a42d-4ac88e340185-kube-api-access-dmckc\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.234424 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.253312 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b32eea1-752e-43a7-a42d-4ac88e340185" (UID: "0b32eea1-752e-43a7-a42d-4ac88e340185"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.336147 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b32eea1-752e-43a7-a42d-4ac88e340185-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.550469 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerID="3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e" exitCode=0 Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.551090 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerDied","Data":"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e"} Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.551137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-kwx4s" event={"ID":"0b32eea1-752e-43a7-a42d-4ac88e340185","Type":"ContainerDied","Data":"1ff9b43a5527281c010f8cccae557dfa752927e3f34da404fa9caf66e8bef4fb"} Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.551158 4915 scope.go:117] "RemoveContainer" containerID="3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.551304 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kwx4s" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.573726 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"] Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.578934 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kwx4s"] Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.580631 4915 scope.go:117] "RemoveContainer" containerID="a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.607019 4915 scope.go:117] "RemoveContainer" containerID="9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.639448 4915 scope.go:117] "RemoveContainer" containerID="3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e" Jan 27 19:14:57 crc kubenswrapper[4915]: E0127 19:14:57.639930 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e\": container with ID starting with 3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e not found: ID does not exist" containerID="3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.639979 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e"} err="failed to get container status \"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e\": rpc error: code = NotFound desc = could not find container \"3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e\": container with ID starting with 3f3c719c0f3d0e4a29cbb81b3821fdd13c6675610e9af1f1d674659046a47c1e not found: ID does not exist" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.640009 4915 scope.go:117] "RemoveContainer" containerID="a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce" Jan 27 19:14:57 crc kubenswrapper[4915]: E0127 19:14:57.640416 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce\": container with ID starting with a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce not found: ID does not exist" containerID="a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.640468 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce"} err="failed to get container status \"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce\": rpc error: code = NotFound desc = could not find container \"a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce\": container with ID starting with a6eea720325aeb952efb58352cfdbeba7ba6edb634328e0c7a7298a3206eb3ce not found: ID does not exist" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.640499 4915 scope.go:117] "RemoveContainer" containerID="9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3" Jan 27 19:14:57 crc kubenswrapper[4915]: E0127 
19:14:57.640937 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3\": container with ID starting with 9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3 not found: ID does not exist" containerID="9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3" Jan 27 19:14:57 crc kubenswrapper[4915]: I0127 19:14:57.640976 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3"} err="failed to get container status \"9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3\": rpc error: code = NotFound desc = could not find container \"9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3\": container with ID starting with 9388b305776e8fc29977b60607ca356024e37118a21cb01aafb17b1408b9a6a3 not found: ID does not exist" Jan 27 19:14:59 crc kubenswrapper[4915]: I0127 19:14:59.373188 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" path="/var/lib/kubelet/pods/0b32eea1-752e-43a7-a42d-4ac88e340185/volumes" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.161060 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr"] Jan 27 19:15:00 crc kubenswrapper[4915]: E0127 19:15:00.161537 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.161562 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="extract-content" Jan 27 19:15:00 crc kubenswrapper[4915]: E0127 19:15:00.161582 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.161591 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4915]: E0127 19:15:00.161609 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.161619 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="extract-utilities" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.161821 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b32eea1-752e-43a7-a42d-4ac88e340185" containerName="registry-server" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.162523 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.166308 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.166495 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.178865 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr"] Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.280370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jxf\" (UniqueName: \"kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf\") pod 
\"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.280464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.280483 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.382290 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jxf\" (UniqueName: \"kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.382442 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.382472 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.384063 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.388425 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.406760 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jxf\" (UniqueName: \"kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf\") pod \"collect-profiles-29492355-w2cbr\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.483952 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:00 crc kubenswrapper[4915]: I0127 19:15:00.911817 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr"] Jan 27 19:15:01 crc kubenswrapper[4915]: I0127 19:15:01.607668 4915 generic.go:334] "Generic (PLEG): container finished" podID="89136b8e-1c34-4298-9d3d-9619f50ace98" containerID="e1f76f2a25d4ff1e856ba0dc8a99d8921f506735e3f4d5ed28228dc8e30badfa" exitCode=0 Jan 27 19:15:01 crc kubenswrapper[4915]: I0127 19:15:01.607706 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" event={"ID":"89136b8e-1c34-4298-9d3d-9619f50ace98","Type":"ContainerDied","Data":"e1f76f2a25d4ff1e856ba0dc8a99d8921f506735e3f4d5ed28228dc8e30badfa"} Jan 27 19:15:01 crc kubenswrapper[4915]: I0127 19:15:01.607731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" event={"ID":"89136b8e-1c34-4298-9d3d-9619f50ace98","Type":"ContainerStarted","Data":"09b70e806886e98e53ba95ad23212458bc6182a7677c857b6f7831c70e08856f"} Jan 27 19:15:02 crc kubenswrapper[4915]: I0127 19:15:02.942117 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.030538 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9jxf\" (UniqueName: \"kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf\") pod \"89136b8e-1c34-4298-9d3d-9619f50ace98\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.030781 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume\") pod \"89136b8e-1c34-4298-9d3d-9619f50ace98\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.030856 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume\") pod \"89136b8e-1c34-4298-9d3d-9619f50ace98\" (UID: \"89136b8e-1c34-4298-9d3d-9619f50ace98\") " Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.032525 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume" (OuterVolumeSpecName: "config-volume") pod "89136b8e-1c34-4298-9d3d-9619f50ace98" (UID: "89136b8e-1c34-4298-9d3d-9619f50ace98"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.041177 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf" (OuterVolumeSpecName: "kube-api-access-g9jxf") pod "89136b8e-1c34-4298-9d3d-9619f50ace98" (UID: "89136b8e-1c34-4298-9d3d-9619f50ace98"). 
InnerVolumeSpecName "kube-api-access-g9jxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.044508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89136b8e-1c34-4298-9d3d-9619f50ace98" (UID: "89136b8e-1c34-4298-9d3d-9619f50ace98"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.132100 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89136b8e-1c34-4298-9d3d-9619f50ace98-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.132146 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89136b8e-1c34-4298-9d3d-9619f50ace98-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.132159 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9jxf\" (UniqueName: \"kubernetes.io/projected/89136b8e-1c34-4298-9d3d-9619f50ace98-kube-api-access-g9jxf\") on node \"crc\" DevicePath \"\"" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.630441 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" event={"ID":"89136b8e-1c34-4298-9d3d-9619f50ace98","Type":"ContainerDied","Data":"09b70e806886e98e53ba95ad23212458bc6182a7677c857b6f7831c70e08856f"} Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.630858 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09b70e806886e98e53ba95ad23212458bc6182a7677c857b6f7831c70e08856f" Jan 27 19:15:03 crc kubenswrapper[4915]: I0127 19:15:03.630505 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr" Jan 27 19:15:04 crc kubenswrapper[4915]: I0127 19:15:04.035012 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"] Jan 27 19:15:04 crc kubenswrapper[4915]: I0127 19:15:04.040387 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-cvxsw"] Jan 27 19:15:05 crc kubenswrapper[4915]: I0127 19:15:05.370732 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9565c20-c5f0-49a7-acd2-1cc2f4862085" path="/var/lib/kubelet/pods/c9565c20-c5f0-49a7-acd2-1cc2f4862085/volumes" Jan 27 19:15:50 crc kubenswrapper[4915]: I0127 19:15:50.625129 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:15:50 crc kubenswrapper[4915]: I0127 19:15:50.625881 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:15:54 crc kubenswrapper[4915]: I0127 19:15:54.140741 4915 scope.go:117] "RemoveContainer" containerID="6a7e413166283ba88181998fce69b87f999667af6f8ab0d9276cd140bee67cbb" Jan 27 19:16:20 crc kubenswrapper[4915]: I0127 19:16:20.624862 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:16:20 crc kubenswrapper[4915]: I0127 19:16:20.625445 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:16:50 crc kubenswrapper[4915]: I0127 19:16:50.624233 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:16:50 crc kubenswrapper[4915]: I0127 19:16:50.624675 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:16:50 crc kubenswrapper[4915]: I0127 19:16:50.624714 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 19:16:50 crc kubenswrapper[4915]: I0127 19:16:50.625353 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:16:50 crc kubenswrapper[4915]: I0127 19:16:50.625419 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052" gracePeriod=600 Jan 27 19:16:51 crc kubenswrapper[4915]: I0127 19:16:51.544504 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052" exitCode=0 Jan 27 19:16:51 crc kubenswrapper[4915]: I0127 19:16:51.544566 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052"} Jan 27 19:16:51 crc kubenswrapper[4915]: I0127 19:16:51.545298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89"} Jan 27 19:16:51 crc kubenswrapper[4915]: I0127 19:16:51.545329 4915 scope.go:117] "RemoveContainer" containerID="aefcdce8356ec002ef9a884e59913a7ff3ffec9bdee63e15e38dae72f7c92080" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.845788 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:37 crc kubenswrapper[4915]: E0127 19:18:37.846932 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89136b8e-1c34-4298-9d3d-9619f50ace98" containerName="collect-profiles" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.846955 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="89136b8e-1c34-4298-9d3d-9619f50ace98" containerName="collect-profiles" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.847165 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89136b8e-1c34-4298-9d3d-9619f50ace98" containerName="collect-profiles" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.848856 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.858177 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.898006 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.898083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:37 crc kubenswrapper[4915]: I0127 19:18:37.898155 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxlm\" (UniqueName: \"kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.000125 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.000207 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxlm\" (UniqueName: \"kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.000329 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.000897 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.001186 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.039914 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxlm\" (UniqueName: 
\"kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm\") pod \"certified-operators-f698h\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.177198 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:38 crc kubenswrapper[4915]: I0127 19:18:38.624734 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:39 crc kubenswrapper[4915]: I0127 19:18:39.393591 4915 generic.go:334] "Generic (PLEG): container finished" podID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerID="3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c" exitCode=0 Jan 27 19:18:39 crc kubenswrapper[4915]: I0127 19:18:39.393741 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerDied","Data":"3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c"} Jan 27 19:18:39 crc kubenswrapper[4915]: I0127 19:18:39.394099 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerStarted","Data":"f12d6bb605b7c66c731f9edfaeb98d30f710dc38ed9db59c730a5740aa0b1411"} Jan 27 19:18:40 crc kubenswrapper[4915]: I0127 19:18:40.411982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerStarted","Data":"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6"} Jan 27 19:18:41 crc kubenswrapper[4915]: I0127 19:18:41.423724 4915 generic.go:334] "Generic (PLEG): container finished" podID="a61a6cf7-110e-4e61-adb4-02098df50f3f" 
containerID="6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6" exitCode=0 Jan 27 19:18:41 crc kubenswrapper[4915]: I0127 19:18:41.423775 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerDied","Data":"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6"} Jan 27 19:18:42 crc kubenswrapper[4915]: I0127 19:18:42.434626 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerStarted","Data":"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8"} Jan 27 19:18:42 crc kubenswrapper[4915]: I0127 19:18:42.458212 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f698h" podStartSLOduration=3.050063692 podStartE2EDuration="5.45818947s" podCreationTimestamp="2026-01-27 19:18:37 +0000 UTC" firstStartedPulling="2026-01-27 19:18:39.397318224 +0000 UTC m=+2210.755171928" lastFinishedPulling="2026-01-27 19:18:41.805444002 +0000 UTC m=+2213.163297706" observedRunningTime="2026-01-27 19:18:42.45615563 +0000 UTC m=+2213.814009354" watchObservedRunningTime="2026-01-27 19:18:42.45818947 +0000 UTC m=+2213.816043154" Jan 27 19:18:48 crc kubenswrapper[4915]: I0127 19:18:48.178204 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:48 crc kubenswrapper[4915]: I0127 19:18:48.178660 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:48 crc kubenswrapper[4915]: I0127 19:18:48.247767 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:48 crc kubenswrapper[4915]: I0127 19:18:48.523989 
4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:48 crc kubenswrapper[4915]: I0127 19:18:48.569547 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:50 crc kubenswrapper[4915]: I0127 19:18:50.498108 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f698h" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="registry-server" containerID="cri-o://19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8" gracePeriod=2 Jan 27 19:18:50 crc kubenswrapper[4915]: I0127 19:18:50.624140 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:18:50 crc kubenswrapper[4915]: I0127 19:18:50.624485 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.390510 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.509011 4915 generic.go:334] "Generic (PLEG): container finished" podID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerID="19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8" exitCode=0 Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.509091 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerDied","Data":"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8"} Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.509143 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f698h" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.509182 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f698h" event={"ID":"a61a6cf7-110e-4e61-adb4-02098df50f3f","Type":"ContainerDied","Data":"f12d6bb605b7c66c731f9edfaeb98d30f710dc38ed9db59c730a5740aa0b1411"} Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.509225 4915 scope.go:117] "RemoveContainer" containerID="19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.516853 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities\") pod \"a61a6cf7-110e-4e61-adb4-02098df50f3f\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.517323 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcxlm\" (UniqueName: \"kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm\") pod 
\"a61a6cf7-110e-4e61-adb4-02098df50f3f\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.517549 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content\") pod \"a61a6cf7-110e-4e61-adb4-02098df50f3f\" (UID: \"a61a6cf7-110e-4e61-adb4-02098df50f3f\") " Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.518561 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities" (OuterVolumeSpecName: "utilities") pod "a61a6cf7-110e-4e61-adb4-02098df50f3f" (UID: "a61a6cf7-110e-4e61-adb4-02098df50f3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.525026 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm" (OuterVolumeSpecName: "kube-api-access-bcxlm") pod "a61a6cf7-110e-4e61-adb4-02098df50f3f" (UID: "a61a6cf7-110e-4e61-adb4-02098df50f3f"). InnerVolumeSpecName "kube-api-access-bcxlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.537248 4915 scope.go:117] "RemoveContainer" containerID="6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.588246 4915 scope.go:117] "RemoveContainer" containerID="3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.598261 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a61a6cf7-110e-4e61-adb4-02098df50f3f" (UID: "a61a6cf7-110e-4e61-adb4-02098df50f3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.615816 4915 scope.go:117] "RemoveContainer" containerID="19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8" Jan 27 19:18:51 crc kubenswrapper[4915]: E0127 19:18:51.616220 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8\": container with ID starting with 19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8 not found: ID does not exist" containerID="19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.616268 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8"} err="failed to get container status \"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8\": rpc error: code = NotFound desc = could not find container \"19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8\": container with ID starting 
with 19c8fbef83a6d43bcc5eeb1cb6e7358d59fd11eb1c54e3c10644cbeaa636dbd8 not found: ID does not exist" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.616302 4915 scope.go:117] "RemoveContainer" containerID="6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6" Jan 27 19:18:51 crc kubenswrapper[4915]: E0127 19:18:51.616614 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6\": container with ID starting with 6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6 not found: ID does not exist" containerID="6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.616680 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6"} err="failed to get container status \"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6\": rpc error: code = NotFound desc = could not find container \"6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6\": container with ID starting with 6cd122aedf698fbc7a375005070a6e171a7fd48d6c802bd0b30ce366e2e4d5c6 not found: ID does not exist" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.616707 4915 scope.go:117] "RemoveContainer" containerID="3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c" Jan 27 19:18:51 crc kubenswrapper[4915]: E0127 19:18:51.617001 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c\": container with ID starting with 3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c not found: ID does not exist" containerID="3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c" Jan 27 19:18:51 
crc kubenswrapper[4915]: I0127 19:18:51.617028 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c"} err="failed to get container status \"3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c\": rpc error: code = NotFound desc = could not find container \"3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c\": container with ID starting with 3b9b2d9678fc1cbec23f67156b91e64cbfa80bbf342a837418e4addc15539f2c not found: ID does not exist" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.618895 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.618920 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61a6cf7-110e-4e61-adb4-02098df50f3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.618933 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcxlm\" (UniqueName: \"kubernetes.io/projected/a61a6cf7-110e-4e61-adb4-02098df50f3f-kube-api-access-bcxlm\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.859394 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:51 crc kubenswrapper[4915]: I0127 19:18:51.876590 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f698h"] Jan 27 19:18:53 crc kubenswrapper[4915]: I0127 19:18:53.367315 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" path="/var/lib/kubelet/pods/a61a6cf7-110e-4e61-adb4-02098df50f3f/volumes" Jan 27 19:19:20 
crc kubenswrapper[4915]: I0127 19:19:20.624723 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:19:20 crc kubenswrapper[4915]: I0127 19:19:20.625418 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.624303 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.624943 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.625003 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.626516 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89"} 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.626620 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" gracePeriod=600 Jan 27 19:19:50 crc kubenswrapper[4915]: E0127 19:19:50.752458 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.992546 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" exitCode=0 Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.992622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89"} Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.993018 4915 scope.go:117] "RemoveContainer" containerID="8d47fc5c3f37fa6ee489fa0a9af8c76f413dbfa7e0da95d9404e899e4bc14052" Jan 27 19:19:50 crc kubenswrapper[4915]: I0127 19:19:50.993760 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 
27 19:19:50 crc kubenswrapper[4915]: E0127 19:19:50.994187 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:20:05 crc kubenswrapper[4915]: I0127 19:20:05.358580 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:20:05 crc kubenswrapper[4915]: E0127 19:20:05.359364 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:20:19 crc kubenswrapper[4915]: I0127 19:20:19.362913 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:20:19 crc kubenswrapper[4915]: E0127 19:20:19.363604 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:20:34 crc kubenswrapper[4915]: I0127 19:20:34.357566 4915 scope.go:117] "RemoveContainer" 
containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:20:34 crc kubenswrapper[4915]: E0127 19:20:34.358781 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:20:48 crc kubenswrapper[4915]: I0127 19:20:48.358847 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:20:48 crc kubenswrapper[4915]: E0127 19:20:48.359567 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:03 crc kubenswrapper[4915]: I0127 19:21:03.358483 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:21:03 crc kubenswrapper[4915]: E0127 19:21:03.359683 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:18 crc kubenswrapper[4915]: I0127 19:21:18.357842 4915 scope.go:117] 
"RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:21:18 crc kubenswrapper[4915]: E0127 19:21:18.358705 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:30 crc kubenswrapper[4915]: I0127 19:21:30.357307 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:21:30 crc kubenswrapper[4915]: E0127 19:21:30.358150 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.167684 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:43 crc kubenswrapper[4915]: E0127 19:21:43.169844 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="extract-utilities" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.169973 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="extract-utilities" Jan 27 19:21:43 crc kubenswrapper[4915]: E0127 19:21:43.170015 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="extract-content" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.170027 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="extract-content" Jan 27 19:21:43 crc kubenswrapper[4915]: E0127 19:21:43.170044 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="registry-server" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.170053 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="registry-server" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.170243 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61a6cf7-110e-4e61-adb4-02098df50f3f" containerName="registry-server" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.171629 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.187643 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.340365 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.340447 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z96h\" (UniqueName: \"kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h\") pod \"redhat-marketplace-4sxv4\" (UID: 
\"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.340544 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.441502 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.441573 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z96h\" (UniqueName: \"kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.442394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.442742 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities\") pod \"redhat-marketplace-4sxv4\" (UID: 
\"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.442997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.460655 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z96h\" (UniqueName: \"kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h\") pod \"redhat-marketplace-4sxv4\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.492628 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:43 crc kubenswrapper[4915]: I0127 19:21:43.945056 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:44 crc kubenswrapper[4915]: I0127 19:21:44.938860 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerID="d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106" exitCode=0 Jan 27 19:21:44 crc kubenswrapper[4915]: I0127 19:21:44.938933 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerDied","Data":"d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106"} Jan 27 19:21:44 crc kubenswrapper[4915]: I0127 19:21:44.939258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" 
event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerStarted","Data":"46f8860ba6bbc2b4c918cadbd2dfc592de45e85ccf786598978cef35471f59e5"} Jan 27 19:21:44 crc kubenswrapper[4915]: I0127 19:21:44.940587 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:21:45 crc kubenswrapper[4915]: I0127 19:21:45.358104 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:21:45 crc kubenswrapper[4915]: E0127 19:21:45.358662 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:45 crc kubenswrapper[4915]: I0127 19:21:45.948496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerStarted","Data":"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a"} Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.564444 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.567589 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.578284 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.686903 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7sf\" (UniqueName: \"kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.686991 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.687107 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.788726 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7sf\" (UniqueName: \"kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.788784 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.788875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.789500 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.790058 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.824046 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7sf\" (UniqueName: \"kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf\") pod \"community-operators-vfb4q\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.892202 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.956653 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerID="784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a" exitCode=0 Jan 27 19:21:46 crc kubenswrapper[4915]: I0127 19:21:46.956747 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerDied","Data":"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a"} Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 19:21:47.445562 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:21:47 crc kubenswrapper[4915]: W0127 19:21:47.452912 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4842d5d7_8f8d_4c3e_a7ae_d316b2cd9d69.slice/crio-79e7c7e20594fcf1eaf732d433c7316d2879f150b1aa49d9149baa907937e731 WatchSource:0}: Error finding container 79e7c7e20594fcf1eaf732d433c7316d2879f150b1aa49d9149baa907937e731: Status 404 returned error can't find the container with id 79e7c7e20594fcf1eaf732d433c7316d2879f150b1aa49d9149baa907937e731 Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 19:21:47.967669 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerStarted","Data":"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919"} Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 19:21:47.970288 4915 generic.go:334] "Generic (PLEG): container finished" podID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerID="a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15" exitCode=0 Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 
19:21:47.970337 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerDied","Data":"a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15"} Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 19:21:47.970374 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerStarted","Data":"79e7c7e20594fcf1eaf732d433c7316d2879f150b1aa49d9149baa907937e731"} Jan 27 19:21:47 crc kubenswrapper[4915]: I0127 19:21:47.991423 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sxv4" podStartSLOduration=2.497697712 podStartE2EDuration="4.991392024s" podCreationTimestamp="2026-01-27 19:21:43 +0000 UTC" firstStartedPulling="2026-01-27 19:21:44.940323328 +0000 UTC m=+2396.298177002" lastFinishedPulling="2026-01-27 19:21:47.43401765 +0000 UTC m=+2398.791871314" observedRunningTime="2026-01-27 19:21:47.988921884 +0000 UTC m=+2399.346775558" watchObservedRunningTime="2026-01-27 19:21:47.991392024 +0000 UTC m=+2399.349245688" Jan 27 19:21:48 crc kubenswrapper[4915]: I0127 19:21:48.979546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerStarted","Data":"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3"} Jan 27 19:21:49 crc kubenswrapper[4915]: I0127 19:21:49.987158 4915 generic.go:334] "Generic (PLEG): container finished" podID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerID="e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3" exitCode=0 Jan 27 19:21:49 crc kubenswrapper[4915]: I0127 19:21:49.987275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" 
event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerDied","Data":"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3"} Jan 27 19:21:50 crc kubenswrapper[4915]: I0127 19:21:50.995813 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerStarted","Data":"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b"} Jan 27 19:21:51 crc kubenswrapper[4915]: I0127 19:21:51.019577 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfb4q" podStartSLOduration=2.578401096 podStartE2EDuration="5.019557198s" podCreationTimestamp="2026-01-27 19:21:46 +0000 UTC" firstStartedPulling="2026-01-27 19:21:47.971535066 +0000 UTC m=+2399.329388760" lastFinishedPulling="2026-01-27 19:21:50.412691198 +0000 UTC m=+2401.770544862" observedRunningTime="2026-01-27 19:21:51.017414865 +0000 UTC m=+2402.375268539" watchObservedRunningTime="2026-01-27 19:21:51.019557198 +0000 UTC m=+2402.377410862" Jan 27 19:21:53 crc kubenswrapper[4915]: I0127 19:21:53.493463 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:53 crc kubenswrapper[4915]: I0127 19:21:53.493517 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:53 crc kubenswrapper[4915]: I0127 19:21:53.546989 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:54 crc kubenswrapper[4915]: I0127 19:21:54.050995 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:54 crc kubenswrapper[4915]: I0127 19:21:54.548169 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.030030 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sxv4" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="registry-server" containerID="cri-o://7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919" gracePeriod=2 Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.357767 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:21:56 crc kubenswrapper[4915]: E0127 19:21:56.358267 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.394415 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.527944 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content\") pod \"b5d605d6-c84c-4e6f-9656-45458cf36aee\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.528009 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z96h\" (UniqueName: \"kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h\") pod \"b5d605d6-c84c-4e6f-9656-45458cf36aee\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.528078 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities\") pod \"b5d605d6-c84c-4e6f-9656-45458cf36aee\" (UID: \"b5d605d6-c84c-4e6f-9656-45458cf36aee\") " Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.532004 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities" (OuterVolumeSpecName: "utilities") pod "b5d605d6-c84c-4e6f-9656-45458cf36aee" (UID: "b5d605d6-c84c-4e6f-9656-45458cf36aee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.534987 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h" (OuterVolumeSpecName: "kube-api-access-7z96h") pod "b5d605d6-c84c-4e6f-9656-45458cf36aee" (UID: "b5d605d6-c84c-4e6f-9656-45458cf36aee"). InnerVolumeSpecName "kube-api-access-7z96h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.559844 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5d605d6-c84c-4e6f-9656-45458cf36aee" (UID: "b5d605d6-c84c-4e6f-9656-45458cf36aee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.629711 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.629937 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z96h\" (UniqueName: \"kubernetes.io/projected/b5d605d6-c84c-4e6f-9656-45458cf36aee-kube-api-access-7z96h\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.629996 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d605d6-c84c-4e6f-9656-45458cf36aee-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.892988 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.893466 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:56 crc kubenswrapper[4915]: I0127 19:21:56.969408 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.040657 4915 generic.go:334] "Generic (PLEG): container 
finished" podID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerID="7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919" exitCode=0 Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.040752 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerDied","Data":"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919"} Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.041367 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sxv4" event={"ID":"b5d605d6-c84c-4e6f-9656-45458cf36aee","Type":"ContainerDied","Data":"46f8860ba6bbc2b4c918cadbd2dfc592de45e85ccf786598978cef35471f59e5"} Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.041418 4915 scope.go:117] "RemoveContainer" containerID="7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.040777 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sxv4" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.059331 4915 scope.go:117] "RemoveContainer" containerID="784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.077013 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.083866 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sxv4"] Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.099322 4915 scope.go:117] "RemoveContainer" containerID="d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.100402 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.117927 4915 scope.go:117] "RemoveContainer" containerID="7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919" Jan 27 19:21:57 crc kubenswrapper[4915]: E0127 19:21:57.118804 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919\": container with ID starting with 7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919 not found: ID does not exist" containerID="7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.118841 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919"} err="failed to get container status \"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919\": rpc error: code = NotFound desc = could not 
find container \"7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919\": container with ID starting with 7a13305152d654e8747e5e73f99bbbe0f7828b996ae73e76d903c59de5a24919 not found: ID does not exist" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.118866 4915 scope.go:117] "RemoveContainer" containerID="784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a" Jan 27 19:21:57 crc kubenswrapper[4915]: E0127 19:21:57.119283 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a\": container with ID starting with 784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a not found: ID does not exist" containerID="784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.119310 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a"} err="failed to get container status \"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a\": rpc error: code = NotFound desc = could not find container \"784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a\": container with ID starting with 784428acc69746ffd41d568895fc55dfac4aad353f25c0b3d80e43c0f953816a not found: ID does not exist" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.119327 4915 scope.go:117] "RemoveContainer" containerID="d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106" Jan 27 19:21:57 crc kubenswrapper[4915]: E0127 19:21:57.119543 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106\": container with ID starting with d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106 not found: ID 
does not exist" containerID="d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.119566 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106"} err="failed to get container status \"d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106\": rpc error: code = NotFound desc = could not find container \"d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106\": container with ID starting with d9c207bce5853e459b42258b00bc9600b1c90e630c0b6a34c715869535aa3106 not found: ID does not exist" Jan 27 19:21:57 crc kubenswrapper[4915]: I0127 19:21:57.367513 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" path="/var/lib/kubelet/pods/b5d605d6-c84c-4e6f-9656-45458cf36aee/volumes" Jan 27 19:21:58 crc kubenswrapper[4915]: I0127 19:21:58.769325 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.066170 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vfb4q" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="registry-server" containerID="cri-o://1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b" gracePeriod=2 Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.528968 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.604880 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7sf\" (UniqueName: \"kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf\") pod \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.605020 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content\") pod \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.605152 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities\") pod \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\" (UID: \"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69\") " Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.606365 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities" (OuterVolumeSpecName: "utilities") pod "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" (UID: "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.620916 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf" (OuterVolumeSpecName: "kube-api-access-8z7sf") pod "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" (UID: "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69"). InnerVolumeSpecName "kube-api-access-8z7sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.675961 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" (UID: "4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.706295 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.706328 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:00 crc kubenswrapper[4915]: I0127 19:22:00.706341 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7sf\" (UniqueName: \"kubernetes.io/projected/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69-kube-api-access-8z7sf\") on node \"crc\" DevicePath \"\"" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.075511 4915 generic.go:334] "Generic (PLEG): container finished" podID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerID="1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b" exitCode=0 Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.076953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerDied","Data":"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b"} Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.077027 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-vfb4q" event={"ID":"4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69","Type":"ContainerDied","Data":"79e7c7e20594fcf1eaf732d433c7316d2879f150b1aa49d9149baa907937e731"} Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.077079 4915 scope.go:117] "RemoveContainer" containerID="1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.077287 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfb4q" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.100112 4915 scope.go:117] "RemoveContainer" containerID="e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.117656 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.125408 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vfb4q"] Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.138596 4915 scope.go:117] "RemoveContainer" containerID="a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.158046 4915 scope.go:117] "RemoveContainer" containerID="1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b" Jan 27 19:22:01 crc kubenswrapper[4915]: E0127 19:22:01.159305 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b\": container with ID starting with 1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b not found: ID does not exist" containerID="1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 
19:22:01.159350 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b"} err="failed to get container status \"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b\": rpc error: code = NotFound desc = could not find container \"1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b\": container with ID starting with 1b58f6fae606a47c555a2c189c058d0db055c3aab62cfd6f31ae67aa8394086b not found: ID does not exist" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.159377 4915 scope.go:117] "RemoveContainer" containerID="e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3" Jan 27 19:22:01 crc kubenswrapper[4915]: E0127 19:22:01.159929 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3\": container with ID starting with e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3 not found: ID does not exist" containerID="e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.159988 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3"} err="failed to get container status \"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3\": rpc error: code = NotFound desc = could not find container \"e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3\": container with ID starting with e4d0c1906a343ff051f99baba1f29fc546d79ef968554b66cbe4d9105869faa3 not found: ID does not exist" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.160024 4915 scope.go:117] "RemoveContainer" containerID="a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15" Jan 27 19:22:01 crc 
kubenswrapper[4915]: E0127 19:22:01.160470 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15\": container with ID starting with a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15 not found: ID does not exist" containerID="a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.160504 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15"} err="failed to get container status \"a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15\": rpc error: code = NotFound desc = could not find container \"a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15\": container with ID starting with a0e71258af05c667bdcbb9f2976a7b5737e62e3d358d11f0d7a70e3c52697e15 not found: ID does not exist" Jan 27 19:22:01 crc kubenswrapper[4915]: I0127 19:22:01.368626 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" path="/var/lib/kubelet/pods/4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69/volumes" Jan 27 19:22:11 crc kubenswrapper[4915]: I0127 19:22:11.358132 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:22:11 crc kubenswrapper[4915]: E0127 19:22:11.359292 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:22:26 crc 
kubenswrapper[4915]: I0127 19:22:26.357876 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:22:26 crc kubenswrapper[4915]: E0127 19:22:26.358631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:22:41 crc kubenswrapper[4915]: I0127 19:22:41.358064 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:22:41 crc kubenswrapper[4915]: E0127 19:22:41.358906 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:22:56 crc kubenswrapper[4915]: I0127 19:22:56.358041 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:22:56 crc kubenswrapper[4915]: E0127 19:22:56.359282 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 
27 19:23:07 crc kubenswrapper[4915]: I0127 19:23:07.357897 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:23:07 crc kubenswrapper[4915]: E0127 19:23:07.359150 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:23:18 crc kubenswrapper[4915]: I0127 19:23:18.357877 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:23:18 crc kubenswrapper[4915]: E0127 19:23:18.358720 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:23:29 crc kubenswrapper[4915]: I0127 19:23:29.365568 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:23:29 crc kubenswrapper[4915]: E0127 19:23:29.367447 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:23:44 crc kubenswrapper[4915]: I0127 19:23:44.358150 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:23:44 crc kubenswrapper[4915]: E0127 19:23:44.359243 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:23:55 crc kubenswrapper[4915]: I0127 19:23:55.358560 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:23:55 crc kubenswrapper[4915]: E0127 19:23:55.359861 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:24:06 crc kubenswrapper[4915]: I0127 19:24:06.358103 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:24:06 crc kubenswrapper[4915]: E0127 19:24:06.358769 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:24:18 crc kubenswrapper[4915]: I0127 19:24:18.358314 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:24:18 crc kubenswrapper[4915]: E0127 19:24:18.359352 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:24:31 crc kubenswrapper[4915]: I0127 19:24:31.357965 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:24:31 crc kubenswrapper[4915]: E0127 19:24:31.358993 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:24:42 crc kubenswrapper[4915]: I0127 19:24:42.358180 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:24:42 crc kubenswrapper[4915]: E0127 19:24:42.358904 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.880247 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:45 crc kubenswrapper[4915]: E0127 19:24:45.881308 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="extract-utilities" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881340 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="extract-utilities" Jan 27 19:24:45 crc kubenswrapper[4915]: E0127 19:24:45.881370 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881389 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: E0127 19:24:45.881421 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="extract-utilities" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881440 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="extract-utilities" Jan 27 19:24:45 crc kubenswrapper[4915]: E0127 19:24:45.881474 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="extract-content" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881491 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="extract-content" Jan 27 19:24:45 
crc kubenswrapper[4915]: E0127 19:24:45.881514 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881534 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: E0127 19:24:45.881578 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="extract-content" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881595 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="extract-content" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.881981 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4842d5d7-8f8d-4c3e-a7ae-d316b2cd9d69" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.882022 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d605d6-c84c-4e6f-9656-45458cf36aee" containerName="registry-server" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.884620 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.892934 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.956741 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.956889 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:45 crc kubenswrapper[4915]: I0127 19:24:45.957133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjjm\" (UniqueName: \"kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.057967 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjjm\" (UniqueName: \"kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.058056 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.058079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.058706 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.058747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.082068 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjjm\" (UniqueName: \"kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm\") pod \"redhat-operators-nhl4z\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.214015 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:46 crc kubenswrapper[4915]: I0127 19:24:46.423762 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:47 crc kubenswrapper[4915]: I0127 19:24:47.394682 4915 generic.go:334] "Generic (PLEG): container finished" podID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerID="298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351" exitCode=0 Jan 27 19:24:47 crc kubenswrapper[4915]: I0127 19:24:47.394770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerDied","Data":"298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351"} Jan 27 19:24:47 crc kubenswrapper[4915]: I0127 19:24:47.395023 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerStarted","Data":"269b0f89e9081e1045927ca2adca37684ec8e7b6442ac24315d6c98bc5c8cc89"} Jan 27 19:24:49 crc kubenswrapper[4915]: I0127 19:24:49.413148 4915 generic.go:334] "Generic (PLEG): container finished" podID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerID="7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28" exitCode=0 Jan 27 19:24:49 crc kubenswrapper[4915]: I0127 19:24:49.413240 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerDied","Data":"7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28"} Jan 27 19:24:50 crc kubenswrapper[4915]: I0127 19:24:50.426238 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" 
event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerStarted","Data":"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548"} Jan 27 19:24:50 crc kubenswrapper[4915]: I0127 19:24:50.454112 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhl4z" podStartSLOduration=2.942566862 podStartE2EDuration="5.454080062s" podCreationTimestamp="2026-01-27 19:24:45 +0000 UTC" firstStartedPulling="2026-01-27 19:24:47.397143063 +0000 UTC m=+2578.754996767" lastFinishedPulling="2026-01-27 19:24:49.908656303 +0000 UTC m=+2581.266509967" observedRunningTime="2026-01-27 19:24:50.447196339 +0000 UTC m=+2581.805050003" watchObservedRunningTime="2026-01-27 19:24:50.454080062 +0000 UTC m=+2581.811933766" Jan 27 19:24:54 crc kubenswrapper[4915]: I0127 19:24:54.357702 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:24:55 crc kubenswrapper[4915]: I0127 19:24:55.468959 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2"} Jan 27 19:24:56 crc kubenswrapper[4915]: I0127 19:24:56.214650 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:56 crc kubenswrapper[4915]: I0127 19:24:56.215044 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:56 crc kubenswrapper[4915]: I0127 19:24:56.293223 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:56 crc kubenswrapper[4915]: I0127 19:24:56.535075 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:56 crc kubenswrapper[4915]: I0127 19:24:56.581362 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.489695 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhl4z" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="registry-server" containerID="cri-o://4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548" gracePeriod=2 Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.854028 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.947502 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content\") pod \"fc23e90c-fbc1-4306-899b-3ecde1139666\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.948141 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities\") pod \"fc23e90c-fbc1-4306-899b-3ecde1139666\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.948256 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxjjm\" (UniqueName: \"kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm\") pod \"fc23e90c-fbc1-4306-899b-3ecde1139666\" (UID: \"fc23e90c-fbc1-4306-899b-3ecde1139666\") " Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.949599 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities" (OuterVolumeSpecName: "utilities") pod "fc23e90c-fbc1-4306-899b-3ecde1139666" (UID: "fc23e90c-fbc1-4306-899b-3ecde1139666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:58 crc kubenswrapper[4915]: I0127 19:24:58.956004 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm" (OuterVolumeSpecName: "kube-api-access-fxjjm") pod "fc23e90c-fbc1-4306-899b-3ecde1139666" (UID: "fc23e90c-fbc1-4306-899b-3ecde1139666"). InnerVolumeSpecName "kube-api-access-fxjjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.049862 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.050146 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxjjm\" (UniqueName: \"kubernetes.io/projected/fc23e90c-fbc1-4306-899b-3ecde1139666-kube-api-access-fxjjm\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.094175 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc23e90c-fbc1-4306-899b-3ecde1139666" (UID: "fc23e90c-fbc1-4306-899b-3ecde1139666"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.152059 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc23e90c-fbc1-4306-899b-3ecde1139666-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.504262 4915 generic.go:334] "Generic (PLEG): container finished" podID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerID="4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548" exitCode=0 Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.504322 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerDied","Data":"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548"} Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.504362 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhl4z" event={"ID":"fc23e90c-fbc1-4306-899b-3ecde1139666","Type":"ContainerDied","Data":"269b0f89e9081e1045927ca2adca37684ec8e7b6442ac24315d6c98bc5c8cc89"} Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.504388 4915 scope.go:117] "RemoveContainer" containerID="4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.504379 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhl4z" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.533659 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.536111 4915 scope.go:117] "RemoveContainer" containerID="7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.545508 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhl4z"] Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.570729 4915 scope.go:117] "RemoveContainer" containerID="298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.598315 4915 scope.go:117] "RemoveContainer" containerID="4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548" Jan 27 19:24:59 crc kubenswrapper[4915]: E0127 19:24:59.599442 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548\": container with ID starting with 4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548 not found: ID does not exist" containerID="4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.599544 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548"} err="failed to get container status \"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548\": rpc error: code = NotFound desc = could not find container \"4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548\": container with ID starting with 4f055ad94ec6e7503f71402d7ceeae66949d13ac7c455f9b705d7dbf0fa9b548 not found: ID does 
not exist" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.599581 4915 scope.go:117] "RemoveContainer" containerID="7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28" Jan 27 19:24:59 crc kubenswrapper[4915]: E0127 19:24:59.600041 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28\": container with ID starting with 7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28 not found: ID does not exist" containerID="7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.600102 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28"} err="failed to get container status \"7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28\": rpc error: code = NotFound desc = could not find container \"7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28\": container with ID starting with 7e6ed48749ecd39a81ac4b4c6c36e65882f5b5f503f3170ff26bd3f3f1d30e28 not found: ID does not exist" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.600163 4915 scope.go:117] "RemoveContainer" containerID="298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351" Jan 27 19:24:59 crc kubenswrapper[4915]: E0127 19:24:59.600636 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351\": container with ID starting with 298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351 not found: ID does not exist" containerID="298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351" Jan 27 19:24:59 crc kubenswrapper[4915]: I0127 19:24:59.600728 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351"} err="failed to get container status \"298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351\": rpc error: code = NotFound desc = could not find container \"298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351\": container with ID starting with 298d813b5026213c240e3c930afadefe47e4ff2038f4d12ea6250fd6594d9351 not found: ID does not exist" Jan 27 19:25:01 crc kubenswrapper[4915]: I0127 19:25:01.369852 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" path="/var/lib/kubelet/pods/fc23e90c-fbc1-4306-899b-3ecde1139666/volumes" Jan 27 19:27:20 crc kubenswrapper[4915]: I0127 19:27:20.624999 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:20 crc kubenswrapper[4915]: I0127 19:27:20.625621 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:27:50 crc kubenswrapper[4915]: I0127 19:27:50.624583 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:27:50 crc kubenswrapper[4915]: I0127 19:27:50.625633 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:28:20 crc kubenswrapper[4915]: I0127 19:28:20.624416 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:28:20 crc kubenswrapper[4915]: I0127 19:28:20.625356 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:28:20 crc kubenswrapper[4915]: I0127 19:28:20.625439 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 19:28:20 crc kubenswrapper[4915]: I0127 19:28:20.626611 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:28:20 crc kubenswrapper[4915]: I0127 19:28:20.626739 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" 
containerID="cri-o://b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2" gracePeriod=600 Jan 27 19:28:21 crc kubenswrapper[4915]: I0127 19:28:21.205306 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2" exitCode=0 Jan 27 19:28:21 crc kubenswrapper[4915]: I0127 19:28:21.205403 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2"} Jan 27 19:28:21 crc kubenswrapper[4915]: I0127 19:28:21.205828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"} Jan 27 19:28:21 crc kubenswrapper[4915]: I0127 19:28:21.205876 4915 scope.go:117] "RemoveContainer" containerID="0a814d49ec33a99dc2089b5d874caad413c3097006151f18aa6aad78c166ef89" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.948950 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:15 crc kubenswrapper[4915]: E0127 19:29:15.950235 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="extract-utilities" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.950257 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="extract-utilities" Jan 27 19:29:15 crc kubenswrapper[4915]: E0127 19:29:15.950270 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="registry-server" Jan 27 
19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.950280 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="registry-server" Jan 27 19:29:15 crc kubenswrapper[4915]: E0127 19:29:15.950298 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="extract-content" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.950309 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="extract-content" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.950618 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc23e90c-fbc1-4306-899b-3ecde1139666" containerName="registry-server" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.952164 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:15 crc kubenswrapper[4915]: I0127 19:29:15.957712 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.020186 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.020319 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bnj\" (UniqueName: \"kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 
27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.020391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.121672 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bnj\" (UniqueName: \"kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.121767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.121835 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.122249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc 
kubenswrapper[4915]: I0127 19:29:16.122272 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.143095 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bnj\" (UniqueName: \"kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj\") pod \"certified-operators-4mcw2\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.273632 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.563663 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:16 crc kubenswrapper[4915]: I0127 19:29:16.696407 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4mcw2" event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerStarted","Data":"496793209c05bb169bc9d4501a855cc687e60932042ca6470a95bcb606b701d3"} Jan 27 19:29:17 crc kubenswrapper[4915]: I0127 19:29:17.709034 4915 generic.go:334] "Generic (PLEG): container finished" podID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerID="146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4" exitCode=0 Jan 27 19:29:17 crc kubenswrapper[4915]: I0127 19:29:17.709116 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4mcw2" 
event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerDied","Data":"146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4"} Jan 27 19:29:17 crc kubenswrapper[4915]: I0127 19:29:17.712420 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:29:20 crc kubenswrapper[4915]: I0127 19:29:20.743703 4915 generic.go:334] "Generic (PLEG): container finished" podID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerID="fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759" exitCode=0 Jan 27 19:29:20 crc kubenswrapper[4915]: I0127 19:29:20.743803 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4mcw2" event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerDied","Data":"fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759"} Jan 27 19:29:21 crc kubenswrapper[4915]: I0127 19:29:21.764851 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4mcw2" event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerStarted","Data":"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0"} Jan 27 19:29:21 crc kubenswrapper[4915]: I0127 19:29:21.794241 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4mcw2" podStartSLOduration=3.211706307 podStartE2EDuration="6.794209785s" podCreationTimestamp="2026-01-27 19:29:15 +0000 UTC" firstStartedPulling="2026-01-27 19:29:17.712019015 +0000 UTC m=+2849.069872719" lastFinishedPulling="2026-01-27 19:29:21.294522533 +0000 UTC m=+2852.652376197" observedRunningTime="2026-01-27 19:29:21.78539942 +0000 UTC m=+2853.143253084" watchObservedRunningTime="2026-01-27 19:29:21.794209785 +0000 UTC m=+2853.152063489" Jan 27 19:29:26 crc kubenswrapper[4915]: I0127 19:29:26.274827 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:26 crc kubenswrapper[4915]: I0127 19:29:26.275191 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:26 crc kubenswrapper[4915]: I0127 19:29:26.335670 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:26 crc kubenswrapper[4915]: I0127 19:29:26.877152 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:26 crc kubenswrapper[4915]: I0127 19:29:26.935684 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:28 crc kubenswrapper[4915]: I0127 19:29:28.822294 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4mcw2" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="registry-server" containerID="cri-o://13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0" gracePeriod=2 Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.286525 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.437277 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities\") pod \"38d1af8c-7592-42f0-8736-0c331e4faa5f\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.437661 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content\") pod \"38d1af8c-7592-42f0-8736-0c331e4faa5f\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.437766 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7bnj\" (UniqueName: \"kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj\") pod \"38d1af8c-7592-42f0-8736-0c331e4faa5f\" (UID: \"38d1af8c-7592-42f0-8736-0c331e4faa5f\") " Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.439749 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities" (OuterVolumeSpecName: "utilities") pod "38d1af8c-7592-42f0-8736-0c331e4faa5f" (UID: "38d1af8c-7592-42f0-8736-0c331e4faa5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.443477 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj" (OuterVolumeSpecName: "kube-api-access-k7bnj") pod "38d1af8c-7592-42f0-8736-0c331e4faa5f" (UID: "38d1af8c-7592-42f0-8736-0c331e4faa5f"). InnerVolumeSpecName "kube-api-access-k7bnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.539290 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7bnj\" (UniqueName: \"kubernetes.io/projected/38d1af8c-7592-42f0-8736-0c331e4faa5f-kube-api-access-k7bnj\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.539333 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.572706 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38d1af8c-7592-42f0-8736-0c331e4faa5f" (UID: "38d1af8c-7592-42f0-8736-0c331e4faa5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.640680 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1af8c-7592-42f0-8736-0c331e4faa5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.836745 4915 generic.go:334] "Generic (PLEG): container finished" podID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerID="13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0" exitCode=0 Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.836882 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4mcw2" event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerDied","Data":"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0"} Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.836942 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4mcw2" event={"ID":"38d1af8c-7592-42f0-8736-0c331e4faa5f","Type":"ContainerDied","Data":"496793209c05bb169bc9d4501a855cc687e60932042ca6470a95bcb606b701d3"} Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.836990 4915 scope.go:117] "RemoveContainer" containerID="13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.837286 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4mcw2" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.883274 4915 scope.go:117] "RemoveContainer" containerID="fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.918460 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.926289 4915 scope.go:117] "RemoveContainer" containerID="146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.930134 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4mcw2"] Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.957956 4915 scope.go:117] "RemoveContainer" containerID="13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0" Jan 27 19:29:29 crc kubenswrapper[4915]: E0127 19:29:29.958857 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0\": container with ID starting with 13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0 not found: ID does not exist" containerID="13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 
19:29:29.958925 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0"} err="failed to get container status \"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0\": rpc error: code = NotFound desc = could not find container \"13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0\": container with ID starting with 13c981bef22f7b755b41c00545c06be0f57ae08f26ed7b1c150096aa25d513a0 not found: ID does not exist" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.958965 4915 scope.go:117] "RemoveContainer" containerID="fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759" Jan 27 19:29:29 crc kubenswrapper[4915]: E0127 19:29:29.959489 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759\": container with ID starting with fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759 not found: ID does not exist" containerID="fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.959584 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759"} err="failed to get container status \"fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759\": rpc error: code = NotFound desc = could not find container \"fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759\": container with ID starting with fa4e671565d173eb6ba86bf56a559d1214288d2b3d4ba3dedb5fca34c57d6759 not found: ID does not exist" Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.959640 4915 scope.go:117] "RemoveContainer" containerID="146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4" Jan 27 19:29:29 crc 
kubenswrapper[4915]: E0127 19:29:29.960316 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4\": container with ID starting with 146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4 not found: ID does not exist" containerID="146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4"
Jan 27 19:29:29 crc kubenswrapper[4915]: I0127 19:29:29.960374 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4"} err="failed to get container status \"146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4\": rpc error: code = NotFound desc = could not find container \"146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4\": container with ID starting with 146c07229862ed1b50d12928633f7484bd2f053d3e039f2ff81ecf7a67fa24d4 not found: ID does not exist"
Jan 27 19:29:31 crc kubenswrapper[4915]: I0127 19:29:31.374860 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" path="/var/lib/kubelet/pods/38d1af8c-7592-42f0-8736-0c331e4faa5f/volumes"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.145577 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"]
Jan 27 19:30:00 crc kubenswrapper[4915]: E0127 19:30:00.146325 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="extract-utilities"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.146343 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="extract-utilities"
Jan 27 19:30:00 crc kubenswrapper[4915]: E0127 19:30:00.146357 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="extract-content"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.146364 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="extract-content"
Jan 27 19:30:00 crc kubenswrapper[4915]: E0127 19:30:00.146376 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="registry-server"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.146382 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="registry-server"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.146510 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d1af8c-7592-42f0-8736-0c331e4faa5f" containerName="registry-server"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.146967 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.149543 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.152637 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.163280 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"]
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.331689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn2tw\" (UniqueName: \"kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.331779 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.331859 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.433063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.433107 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.433176 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn2tw\" (UniqueName: \"kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.434391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.440587 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.462752 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn2tw\" (UniqueName: \"kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw\") pod \"collect-profiles-29492370-mxv69\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.467777 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:00 crc kubenswrapper[4915]: I0127 19:30:00.927018 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"]
Jan 27 19:30:01 crc kubenswrapper[4915]: I0127 19:30:01.095181 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69" event={"ID":"58a0a309-348a-4379-a435-a2e95ee9d37d","Type":"ContainerStarted","Data":"952a08ba4644fbea4f825348f02284d7be7d574ce293e32c660aa02161e71393"}
Jan 27 19:30:01 crc kubenswrapper[4915]: I0127 19:30:01.095695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69" event={"ID":"58a0a309-348a-4379-a435-a2e95ee9d37d","Type":"ContainerStarted","Data":"6f85bc452005ae04ec0770a20ba33268c50bb373dc8442552be632e32e80f459"}
Jan 27 19:30:01 crc kubenswrapper[4915]: I0127 19:30:01.130516 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69" podStartSLOduration=1.130491622 podStartE2EDuration="1.130491622s" podCreationTimestamp="2026-01-27 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:30:01.119166263 +0000 UTC m=+2892.477019967" watchObservedRunningTime="2026-01-27 19:30:01.130491622 +0000 UTC m=+2892.488345286"
Jan 27 19:30:02 crc kubenswrapper[4915]: I0127 19:30:02.103217 4915 generic.go:334] "Generic (PLEG): container finished" podID="58a0a309-348a-4379-a435-a2e95ee9d37d" containerID="952a08ba4644fbea4f825348f02284d7be7d574ce293e32c660aa02161e71393" exitCode=0
Jan 27 19:30:02 crc kubenswrapper[4915]: I0127 19:30:02.103272 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69" event={"ID":"58a0a309-348a-4379-a435-a2e95ee9d37d","Type":"ContainerDied","Data":"952a08ba4644fbea4f825348f02284d7be7d574ce293e32c660aa02161e71393"}
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.464430 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.586934 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume\") pod \"58a0a309-348a-4379-a435-a2e95ee9d37d\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") "
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.587030 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn2tw\" (UniqueName: \"kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw\") pod \"58a0a309-348a-4379-a435-a2e95ee9d37d\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") "
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.587137 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume\") pod \"58a0a309-348a-4379-a435-a2e95ee9d37d\" (UID: \"58a0a309-348a-4379-a435-a2e95ee9d37d\") "
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.587727 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume" (OuterVolumeSpecName: "config-volume") pod "58a0a309-348a-4379-a435-a2e95ee9d37d" (UID: "58a0a309-348a-4379-a435-a2e95ee9d37d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.592285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw" (OuterVolumeSpecName: "kube-api-access-bn2tw") pod "58a0a309-348a-4379-a435-a2e95ee9d37d" (UID: "58a0a309-348a-4379-a435-a2e95ee9d37d"). InnerVolumeSpecName "kube-api-access-bn2tw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.599519 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58a0a309-348a-4379-a435-a2e95ee9d37d" (UID: "58a0a309-348a-4379-a435-a2e95ee9d37d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.688851 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0a309-348a-4379-a435-a2e95ee9d37d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.688891 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58a0a309-348a-4379-a435-a2e95ee9d37d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:30:03 crc kubenswrapper[4915]: I0127 19:30:03.688903 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn2tw\" (UniqueName: \"kubernetes.io/projected/58a0a309-348a-4379-a435-a2e95ee9d37d-kube-api-access-bn2tw\") on node \"crc\" DevicePath \"\""
Jan 27 19:30:04 crc kubenswrapper[4915]: I0127 19:30:04.120752 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69" event={"ID":"58a0a309-348a-4379-a435-a2e95ee9d37d","Type":"ContainerDied","Data":"6f85bc452005ae04ec0770a20ba33268c50bb373dc8442552be632e32e80f459"}
Jan 27 19:30:04 crc kubenswrapper[4915]: I0127 19:30:04.120819 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f85bc452005ae04ec0770a20ba33268c50bb373dc8442552be632e32e80f459"
Jan 27 19:30:04 crc kubenswrapper[4915]: I0127 19:30:04.120873 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"
Jan 27 19:30:04 crc kubenswrapper[4915]: I0127 19:30:04.548554 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"]
Jan 27 19:30:04 crc kubenswrapper[4915]: I0127 19:30:04.555461 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-7jg5z"]
Jan 27 19:30:05 crc kubenswrapper[4915]: I0127 19:30:05.368955 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd" path="/var/lib/kubelet/pods/26ca3030-1aef-4e8a-a7cb-ac9b8b8bbafd/volumes"
Jan 27 19:30:20 crc kubenswrapper[4915]: I0127 19:30:20.625109 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:30:20 crc kubenswrapper[4915]: I0127 19:30:20.625764 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:30:50 crc kubenswrapper[4915]: I0127 19:30:50.625089 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:30:50 crc kubenswrapper[4915]: I0127 19:30:50.625724 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:30:54 crc kubenswrapper[4915]: I0127 19:30:54.615702 4915 scope.go:117] "RemoveContainer" containerID="4493a21aaba906d69aef852a9340c8b02c6fdcb9bd20a8b861ea1ca4b21dea5a"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.624521 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.625143 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.625208 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.626037 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.626129 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" gracePeriod=600
Jan 27 19:31:20 crc kubenswrapper[4915]: E0127 19:31:20.758525 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.786402 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" exitCode=0
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.786476 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"}
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.786512 4915 scope.go:117] "RemoveContainer" containerID="b5bb0073b12a0cba5b28542e7c19879b6491bbb8e1f4c41b776e3f628cf0a0b2"
Jan 27 19:31:20 crc kubenswrapper[4915]: I0127 19:31:20.787214 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:31:20 crc kubenswrapper[4915]: E0127 19:31:20.787570 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:31:31 crc kubenswrapper[4915]: I0127 19:31:31.357765 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:31:31 crc kubenswrapper[4915]: E0127 19:31:31.358609 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:31:42 crc kubenswrapper[4915]: I0127 19:31:42.358174 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:31:42 crc kubenswrapper[4915]: E0127 19:31:42.359342 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:31:53 crc kubenswrapper[4915]: I0127 19:31:53.357377 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:31:53 crc kubenswrapper[4915]: E0127 19:31:53.358153 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:32:04 crc kubenswrapper[4915]: I0127 19:32:04.358284 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:32:04 crc kubenswrapper[4915]: E0127 19:32:04.360242 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.344454 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:10 crc kubenswrapper[4915]: E0127 19:32:10.345484 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a0a309-348a-4379-a435-a2e95ee9d37d" containerName="collect-profiles"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.345513 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a0a309-348a-4379-a435-a2e95ee9d37d" containerName="collect-profiles"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.345856 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a0a309-348a-4379-a435-a2e95ee9d37d" containerName="collect-profiles"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.348168 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.365625 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.515172 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rcv\" (UniqueName: \"kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.515530 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.515693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.616503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.616583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.616642 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rcv\" (UniqueName: \"kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.617070 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.617180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.638404 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rcv\" (UniqueName: \"kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv\") pod \"community-operators-6mpvb\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") " pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:10 crc kubenswrapper[4915]: I0127 19:32:10.677451 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:11 crc kubenswrapper[4915]: I0127 19:32:11.143775 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:11 crc kubenswrapper[4915]: I0127 19:32:11.235186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerStarted","Data":"ff48ad96341cbf80db991804ed784ec96ea581746d6becfa26bc61e0afb0de80"}
Jan 27 19:32:12 crc kubenswrapper[4915]: I0127 19:32:12.245622 4915 generic.go:334] "Generic (PLEG): container finished" podID="d54eca8a-860f-4b29-b35c-074c360563cd" containerID="9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468" exitCode=0
Jan 27 19:32:12 crc kubenswrapper[4915]: I0127 19:32:12.245699 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerDied","Data":"9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468"}
Jan 27 19:32:13 crc kubenswrapper[4915]: I0127 19:32:13.254483 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerStarted","Data":"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"}
Jan 27 19:32:14 crc kubenswrapper[4915]: I0127 19:32:14.266837 4915 generic.go:334] "Generic (PLEG): container finished" podID="d54eca8a-860f-4b29-b35c-074c360563cd" containerID="03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e" exitCode=0
Jan 27 19:32:14 crc kubenswrapper[4915]: I0127 19:32:14.266915 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerDied","Data":"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"}
Jan 27 19:32:15 crc kubenswrapper[4915]: I0127 19:32:15.277641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerStarted","Data":"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"}
Jan 27 19:32:15 crc kubenswrapper[4915]: I0127 19:32:15.302256 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mpvb" podStartSLOduration=2.8605537720000003 podStartE2EDuration="5.302232114s" podCreationTimestamp="2026-01-27 19:32:10 +0000 UTC" firstStartedPulling="2026-01-27 19:32:12.247430105 +0000 UTC m=+3023.605283789" lastFinishedPulling="2026-01-27 19:32:14.689108467 +0000 UTC m=+3026.046962131" observedRunningTime="2026-01-27 19:32:15.299544118 +0000 UTC m=+3026.657397792" watchObservedRunningTime="2026-01-27 19:32:15.302232114 +0000 UTC m=+3026.660085778"
Jan 27 19:32:15 crc kubenswrapper[4915]: I0127 19:32:15.358441 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:32:15 crc kubenswrapper[4915]: E0127 19:32:15.358655 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:32:20 crc kubenswrapper[4915]: I0127 19:32:20.679096 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:20 crc kubenswrapper[4915]: I0127 19:32:20.680546 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:20 crc kubenswrapper[4915]: I0127 19:32:20.758175 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:21 crc kubenswrapper[4915]: I0127 19:32:21.379863 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:21 crc kubenswrapper[4915]: I0127 19:32:21.433767 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.339897 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6mpvb" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="registry-server" containerID="cri-o://8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4" gracePeriod=2
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.805713 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.917847 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content\") pod \"d54eca8a-860f-4b29-b35c-074c360563cd\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") "
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.917957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5rcv\" (UniqueName: \"kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv\") pod \"d54eca8a-860f-4b29-b35c-074c360563cd\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") "
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.917993 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities\") pod \"d54eca8a-860f-4b29-b35c-074c360563cd\" (UID: \"d54eca8a-860f-4b29-b35c-074c360563cd\") "
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.919601 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities" (OuterVolumeSpecName: "utilities") pod "d54eca8a-860f-4b29-b35c-074c360563cd" (UID: "d54eca8a-860f-4b29-b35c-074c360563cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.926563 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv" (OuterVolumeSpecName: "kube-api-access-r5rcv") pod "d54eca8a-860f-4b29-b35c-074c360563cd" (UID: "d54eca8a-860f-4b29-b35c-074c360563cd"). InnerVolumeSpecName "kube-api-access-r5rcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:32:23 crc kubenswrapper[4915]: I0127 19:32:23.981261 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d54eca8a-860f-4b29-b35c-074c360563cd" (UID: "d54eca8a-860f-4b29-b35c-074c360563cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.020507 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.020556 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d54eca8a-860f-4b29-b35c-074c360563cd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.020580 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5rcv\" (UniqueName: \"kubernetes.io/projected/d54eca8a-860f-4b29-b35c-074c360563cd-kube-api-access-r5rcv\") on node \"crc\" DevicePath \"\""
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.352478 4915 generic.go:334] "Generic (PLEG): container finished" podID="d54eca8a-860f-4b29-b35c-074c360563cd" containerID="8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4" exitCode=0
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.352527 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerDied","Data":"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"}
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.352563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpvb" event={"ID":"d54eca8a-860f-4b29-b35c-074c360563cd","Type":"ContainerDied","Data":"ff48ad96341cbf80db991804ed784ec96ea581746d6becfa26bc61e0afb0de80"}
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.352566 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpvb"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.352584 4915 scope.go:117] "RemoveContainer" containerID="8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.388349 4915 scope.go:117] "RemoveContainer" containerID="03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.389050 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.395585 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6mpvb"]
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.413258 4915 scope.go:117] "RemoveContainer" containerID="9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.438174 4915 scope.go:117] "RemoveContainer" containerID="8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"
Jan 27 19:32:24 crc kubenswrapper[4915]: E0127 19:32:24.438989 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4\": container with ID starting with 8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4 not found: ID does not exist" containerID="8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.439030 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4"} err="failed to get container status \"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4\": rpc error: code = NotFound desc = could not find container \"8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4\": container with ID starting with 8b9247fa72606d6e267f19527035390a5386114df231a81edf250d6a03960dc4 not found: ID does not exist"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.439080 4915 scope.go:117] "RemoveContainer" containerID="03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"
Jan 27 19:32:24 crc kubenswrapper[4915]: E0127 19:32:24.439513 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e\": container with ID starting with 03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e not found: ID does not exist" containerID="03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.439552 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e"} err="failed to get container status \"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e\": rpc error: code = NotFound desc = could not find container \"03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e\": container with ID starting with 03c0fb48a1e7b373edd0c97c33744807192debb392bf2fc9118b0a535bb1c29e not found: ID does not exist"
Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.439584 4915 scope.go:117] "RemoveContainer" containerID="9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468"
Jan 27 19:32:24 crc
kubenswrapper[4915]: E0127 19:32:24.440124 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468\": container with ID starting with 9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468 not found: ID does not exist" containerID="9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468" Jan 27 19:32:24 crc kubenswrapper[4915]: I0127 19:32:24.440163 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468"} err="failed to get container status \"9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468\": rpc error: code = NotFound desc = could not find container \"9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468\": container with ID starting with 9a8fd43aa1911d1cb591a2d6351d230467aa2133c00cd6b3a452ba92664d5468 not found: ID does not exist" Jan 27 19:32:25 crc kubenswrapper[4915]: I0127 19:32:25.367550 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" path="/var/lib/kubelet/pods/d54eca8a-860f-4b29-b35c-074c360563cd/volumes" Jan 27 19:32:29 crc kubenswrapper[4915]: I0127 19:32:29.358247 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:32:29 crc kubenswrapper[4915]: E0127 19:32:29.359570 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:32:43 crc 
kubenswrapper[4915]: I0127 19:32:43.358448 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:32:43 crc kubenswrapper[4915]: E0127 19:32:43.359169 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.423243 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:43 crc kubenswrapper[4915]: E0127 19:32:43.423573 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="extract-utilities" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.423608 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="extract-utilities" Jan 27 19:32:43 crc kubenswrapper[4915]: E0127 19:32:43.423643 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="extract-content" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.423651 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="extract-content" Jan 27 19:32:43 crc kubenswrapper[4915]: E0127 19:32:43.423667 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="registry-server" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.423676 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" 
containerName="registry-server" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.424811 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54eca8a-860f-4b29-b35c-074c360563cd" containerName="registry-server" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.434621 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.465098 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.544126 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.544209 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.544260 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44g6d\" (UniqueName: \"kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.645077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.645147 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.645190 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44g6d\" (UniqueName: \"kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.645653 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.645684 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.666333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44g6d\" (UniqueName: 
\"kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d\") pod \"redhat-marketplace-bgqbw\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:43 crc kubenswrapper[4915]: I0127 19:32:43.770888 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:44 crc kubenswrapper[4915]: I0127 19:32:44.210719 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:44 crc kubenswrapper[4915]: I0127 19:32:44.526966 4915 generic.go:334] "Generic (PLEG): container finished" podID="c36b749d-1a76-4503-954d-7b76c0528180" containerID="8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7" exitCode=0 Jan 27 19:32:44 crc kubenswrapper[4915]: I0127 19:32:44.527046 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerDied","Data":"8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7"} Jan 27 19:32:44 crc kubenswrapper[4915]: I0127 19:32:44.527251 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerStarted","Data":"7ea48305feba73e2113d259dca62ad664ba123e622171bbecdd4c809ca83a75a"} Jan 27 19:32:45 crc kubenswrapper[4915]: I0127 19:32:45.539308 4915 generic.go:334] "Generic (PLEG): container finished" podID="c36b749d-1a76-4503-954d-7b76c0528180" containerID="3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230" exitCode=0 Jan 27 19:32:45 crc kubenswrapper[4915]: I0127 19:32:45.539377 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" 
event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerDied","Data":"3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230"} Jan 27 19:32:46 crc kubenswrapper[4915]: I0127 19:32:46.552300 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerStarted","Data":"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e"} Jan 27 19:32:46 crc kubenswrapper[4915]: I0127 19:32:46.580845 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgqbw" podStartSLOduration=1.9999923160000002 podStartE2EDuration="3.580821043s" podCreationTimestamp="2026-01-27 19:32:43 +0000 UTC" firstStartedPulling="2026-01-27 19:32:44.53040665 +0000 UTC m=+3055.888260334" lastFinishedPulling="2026-01-27 19:32:46.111235387 +0000 UTC m=+3057.469089061" observedRunningTime="2026-01-27 19:32:46.573976214 +0000 UTC m=+3057.931829898" watchObservedRunningTime="2026-01-27 19:32:46.580821043 +0000 UTC m=+3057.938674727" Jan 27 19:32:53 crc kubenswrapper[4915]: I0127 19:32:53.771866 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:53 crc kubenswrapper[4915]: I0127 19:32:53.772367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:53 crc kubenswrapper[4915]: I0127 19:32:53.839195 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:54 crc kubenswrapper[4915]: I0127 19:32:54.358434 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:32:54 crc kubenswrapper[4915]: E0127 19:32:54.358843 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:32:54 crc kubenswrapper[4915]: I0127 19:32:54.678455 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:54 crc kubenswrapper[4915]: I0127 19:32:54.732590 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:56 crc kubenswrapper[4915]: I0127 19:32:56.644018 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgqbw" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="registry-server" containerID="cri-o://6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e" gracePeriod=2 Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.087138 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.260247 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities\") pod \"c36b749d-1a76-4503-954d-7b76c0528180\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.260393 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44g6d\" (UniqueName: \"kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d\") pod \"c36b749d-1a76-4503-954d-7b76c0528180\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.260518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content\") pod \"c36b749d-1a76-4503-954d-7b76c0528180\" (UID: \"c36b749d-1a76-4503-954d-7b76c0528180\") " Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.261771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities" (OuterVolumeSpecName: "utilities") pod "c36b749d-1a76-4503-954d-7b76c0528180" (UID: "c36b749d-1a76-4503-954d-7b76c0528180"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.267475 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d" (OuterVolumeSpecName: "kube-api-access-44g6d") pod "c36b749d-1a76-4503-954d-7b76c0528180" (UID: "c36b749d-1a76-4503-954d-7b76c0528180"). InnerVolumeSpecName "kube-api-access-44g6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.282211 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c36b749d-1a76-4503-954d-7b76c0528180" (UID: "c36b749d-1a76-4503-954d-7b76c0528180"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.363112 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.363202 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44g6d\" (UniqueName: \"kubernetes.io/projected/c36b749d-1a76-4503-954d-7b76c0528180-kube-api-access-44g6d\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.363223 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c36b749d-1a76-4503-954d-7b76c0528180-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.657232 4915 generic.go:334] "Generic (PLEG): container finished" podID="c36b749d-1a76-4503-954d-7b76c0528180" containerID="6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e" exitCode=0 Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.657301 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgqbw" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.657370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerDied","Data":"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e"} Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.657456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgqbw" event={"ID":"c36b749d-1a76-4503-954d-7b76c0528180","Type":"ContainerDied","Data":"7ea48305feba73e2113d259dca62ad664ba123e622171bbecdd4c809ca83a75a"} Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.657486 4915 scope.go:117] "RemoveContainer" containerID="6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.683870 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.691144 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgqbw"] Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.693459 4915 scope.go:117] "RemoveContainer" containerID="3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.712088 4915 scope.go:117] "RemoveContainer" containerID="8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.751709 4915 scope.go:117] "RemoveContainer" containerID="6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e" Jan 27 19:32:57 crc kubenswrapper[4915]: E0127 19:32:57.752137 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e\": container with ID starting with 6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e not found: ID does not exist" containerID="6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.752167 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e"} err="failed to get container status \"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e\": rpc error: code = NotFound desc = could not find container \"6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e\": container with ID starting with 6c7400e05ae5309a38d8a13a0a2df4f8bcea159495c20dfb3a3e1f7035109a9e not found: ID does not exist" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.752189 4915 scope.go:117] "RemoveContainer" containerID="3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230" Jan 27 19:32:57 crc kubenswrapper[4915]: E0127 19:32:57.752392 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230\": container with ID starting with 3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230 not found: ID does not exist" containerID="3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.752415 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230"} err="failed to get container status \"3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230\": rpc error: code = NotFound desc = could not find container \"3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230\": container with ID 
starting with 3bb10242d3a4129c3b8aad8a0f896c00fc02b7aea1057237e4f999aa4827e230 not found: ID does not exist" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.752428 4915 scope.go:117] "RemoveContainer" containerID="8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7" Jan 27 19:32:57 crc kubenswrapper[4915]: E0127 19:32:57.752644 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7\": container with ID starting with 8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7 not found: ID does not exist" containerID="8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7" Jan 27 19:32:57 crc kubenswrapper[4915]: I0127 19:32:57.752668 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7"} err="failed to get container status \"8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7\": rpc error: code = NotFound desc = could not find container \"8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7\": container with ID starting with 8d043747f61863446919a5e27331298ea1f8e2e8efbe763d4d15a0abe2c79eb7 not found: ID does not exist" Jan 27 19:32:59 crc kubenswrapper[4915]: I0127 19:32:59.374168 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36b749d-1a76-4503-954d-7b76c0528180" path="/var/lib/kubelet/pods/c36b749d-1a76-4503-954d-7b76c0528180/volumes" Jan 27 19:33:05 crc kubenswrapper[4915]: I0127 19:33:05.358783 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:33:05 crc kubenswrapper[4915]: E0127 19:33:05.360028 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:33:16 crc kubenswrapper[4915]: I0127 19:33:16.357692 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:33:16 crc kubenswrapper[4915]: E0127 19:33:16.358854 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:33:29 crc kubenswrapper[4915]: I0127 19:33:29.365446 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:33:29 crc kubenswrapper[4915]: E0127 19:33:29.366711 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:33:41 crc kubenswrapper[4915]: I0127 19:33:41.359007 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:33:41 crc kubenswrapper[4915]: E0127 19:33:41.360263 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:33:54 crc kubenswrapper[4915]: I0127 19:33:54.357963 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:33:54 crc kubenswrapper[4915]: E0127 19:33:54.358993 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:34:06 crc kubenswrapper[4915]: I0127 19:34:06.357946 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:34:06 crc kubenswrapper[4915]: E0127 19:34:06.358868 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:34:20 crc kubenswrapper[4915]: I0127 19:34:20.357153 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:34:20 crc kubenswrapper[4915]: E0127 19:34:20.358058 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:34:33 crc kubenswrapper[4915]: I0127 19:34:33.358089 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:34:33 crc kubenswrapper[4915]: E0127 19:34:33.358923 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:34:46 crc kubenswrapper[4915]: I0127 19:34:46.357948 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:34:46 crc kubenswrapper[4915]: E0127 19:34:46.358863 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:34:58 crc kubenswrapper[4915]: I0127 19:34:58.358147 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:34:58 crc kubenswrapper[4915]: E0127 19:34:58.359314 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:35:12 crc kubenswrapper[4915]: I0127 19:35:12.357923 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:35:12 crc kubenswrapper[4915]: E0127 19:35:12.359045 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:35:23 crc kubenswrapper[4915]: I0127 19:35:23.357531 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:35:23 crc kubenswrapper[4915]: E0127 19:35:23.358356 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:35:37 crc kubenswrapper[4915]: I0127 19:35:37.358438 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:35:37 crc kubenswrapper[4915]: E0127 19:35:37.359419 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:35:50 crc kubenswrapper[4915]: I0127 19:35:50.357152 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:35:50 crc kubenswrapper[4915]: E0127 19:35:50.358005 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:36:02 crc kubenswrapper[4915]: I0127 19:36:02.358558 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:36:02 crc kubenswrapper[4915]: E0127 19:36:02.359778 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:36:17 crc kubenswrapper[4915]: I0127 19:36:17.358498 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:36:17 crc kubenswrapper[4915]: E0127 19:36:17.359689 4915 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:36:29 crc kubenswrapper[4915]: I0127 19:36:29.367285 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445" Jan 27 19:36:30 crc kubenswrapper[4915]: I0127 19:36:30.533695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b"} Jan 27 19:38:50 crc kubenswrapper[4915]: I0127 19:38:50.624525 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:38:50 crc kubenswrapper[4915]: I0127 19:38:50.625499 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.175731 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:38:58 crc kubenswrapper[4915]: E0127 19:38:58.177080 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36b749d-1a76-4503-954d-7b76c0528180" 
containerName="extract-content" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.177114 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="extract-content" Jan 27 19:38:58 crc kubenswrapper[4915]: E0127 19:38:58.177142 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="registry-server" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.177159 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="registry-server" Jan 27 19:38:58 crc kubenswrapper[4915]: E0127 19:38:58.177196 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="extract-utilities" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.177215 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="extract-utilities" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.177511 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36b749d-1a76-4503-954d-7b76c0528180" containerName="registry-server" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.179354 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.204935 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.319459 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.319599 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcdf\" (UniqueName: \"kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.319700 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.421172 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.421323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.421454 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcdf\" (UniqueName: \"kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.421907 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.422044 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.448538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcdf\" (UniqueName: \"kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf\") pod \"redhat-operators-qstmq\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.514996 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.782635 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:38:58 crc kubenswrapper[4915]: I0127 19:38:58.812747 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerStarted","Data":"447ac823acf0cf46459912cbc259ee79c5e3ae9b4aaacf3a6cdf6d2efbfc8130"} Jan 27 19:38:59 crc kubenswrapper[4915]: I0127 19:38:59.823142 4915 generic.go:334] "Generic (PLEG): container finished" podID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerID="257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3" exitCode=0 Jan 27 19:38:59 crc kubenswrapper[4915]: I0127 19:38:59.823245 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerDied","Data":"257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3"} Jan 27 19:38:59 crc kubenswrapper[4915]: I0127 19:38:59.827067 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:39:02 crc kubenswrapper[4915]: I0127 19:39:02.855422 4915 generic.go:334] "Generic (PLEG): container finished" podID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerID="5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1" exitCode=0 Jan 27 19:39:02 crc kubenswrapper[4915]: I0127 19:39:02.855518 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerDied","Data":"5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1"} Jan 27 19:39:03 crc kubenswrapper[4915]: I0127 19:39:03.866646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerStarted","Data":"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f"} Jan 27 19:39:03 crc kubenswrapper[4915]: I0127 19:39:03.896234 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qstmq" podStartSLOduration=2.20726925 podStartE2EDuration="5.896210833s" podCreationTimestamp="2026-01-27 19:38:58 +0000 UTC" firstStartedPulling="2026-01-27 19:38:59.826247661 +0000 UTC m=+3431.184101375" lastFinishedPulling="2026-01-27 19:39:03.515189294 +0000 UTC m=+3434.873042958" observedRunningTime="2026-01-27 19:39:03.891055765 +0000 UTC m=+3435.248909469" watchObservedRunningTime="2026-01-27 19:39:03.896210833 +0000 UTC m=+3435.254064527" Jan 27 19:39:08 crc kubenswrapper[4915]: I0127 19:39:08.516026 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:39:08 crc kubenswrapper[4915]: I0127 19:39:08.516574 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:39:09 crc kubenswrapper[4915]: I0127 19:39:09.560680 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qstmq" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="registry-server" probeResult="failure" output=< Jan 27 19:39:09 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 19:39:09 crc kubenswrapper[4915]: > Jan 27 19:39:18 crc kubenswrapper[4915]: I0127 19:39:18.589731 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:39:18 crc kubenswrapper[4915]: I0127 19:39:18.658653 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qstmq" 
Jan 27 19:39:19 crc kubenswrapper[4915]: I0127 19:39:19.963230 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.000650 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qstmq" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="registry-server" containerID="cri-o://2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f" gracePeriod=2 Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.442713 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.558145 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content\") pod \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.558251 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcdf\" (UniqueName: \"kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf\") pod \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.558439 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities\") pod \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\" (UID: \"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56\") " Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.560631 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities" (OuterVolumeSpecName: "utilities") pod "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" (UID: "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.564642 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf" (OuterVolumeSpecName: "kube-api-access-czcdf") pod "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" (UID: "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56"). InnerVolumeSpecName "kube-api-access-czcdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.625109 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.625474 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.663589 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.663641 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czcdf\" (UniqueName: 
\"kubernetes.io/projected/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-kube-api-access-czcdf\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.734056 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" (UID: "d38b2af2-cd4a-4d3d-a1c9-bfc638029a56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:39:20 crc kubenswrapper[4915]: I0127 19:39:20.765011 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.012454 4915 generic.go:334] "Generic (PLEG): container finished" podID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerID="2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f" exitCode=0 Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.012536 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerDied","Data":"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f"} Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.012595 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstmq" event={"ID":"d38b2af2-cd4a-4d3d-a1c9-bfc638029a56","Type":"ContainerDied","Data":"447ac823acf0cf46459912cbc259ee79c5e3ae9b4aaacf3a6cdf6d2efbfc8130"} Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.012548 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qstmq" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.012620 4915 scope.go:117] "RemoveContainer" containerID="2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.044278 4915 scope.go:117] "RemoveContainer" containerID="5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.057561 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.063221 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qstmq"] Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.082091 4915 scope.go:117] "RemoveContainer" containerID="257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.105483 4915 scope.go:117] "RemoveContainer" containerID="2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f" Jan 27 19:39:21 crc kubenswrapper[4915]: E0127 19:39:21.106185 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f\": container with ID starting with 2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f not found: ID does not exist" containerID="2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.106229 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f"} err="failed to get container status \"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f\": rpc error: code = NotFound desc = could not find container 
\"2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f\": container with ID starting with 2c28f93358edec64a52d96a053aecf5d7e7a75be9b72fc22be13e9804bcf956f not found: ID does not exist" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.106256 4915 scope.go:117] "RemoveContainer" containerID="5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1" Jan 27 19:39:21 crc kubenswrapper[4915]: E0127 19:39:21.106737 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1\": container with ID starting with 5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1 not found: ID does not exist" containerID="5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.106777 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1"} err="failed to get container status \"5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1\": rpc error: code = NotFound desc = could not find container \"5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1\": container with ID starting with 5706538c6c4a7fc1d9f772b4f6f3fc1b0eb5bc521769bc70462a855b1e6fa1b1 not found: ID does not exist" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.106820 4915 scope.go:117] "RemoveContainer" containerID="257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3" Jan 27 19:39:21 crc kubenswrapper[4915]: E0127 19:39:21.107051 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3\": container with ID starting with 257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3 not found: ID does not exist" 
containerID="257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.107079 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3"} err="failed to get container status \"257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3\": rpc error: code = NotFound desc = could not find container \"257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3\": container with ID starting with 257a97d36516cfd1ef31d44d36e6c2db4418e59fccf77c25e35655e94b8c5ee3 not found: ID does not exist" Jan 27 19:39:21 crc kubenswrapper[4915]: I0127 19:39:21.374267 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" path="/var/lib/kubelet/pods/d38b2af2-cd4a-4d3d-a1c9-bfc638029a56/volumes" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.375974 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"] Jan 27 19:39:23 crc kubenswrapper[4915]: E0127 19:39:23.376399 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="extract-content" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.376422 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="extract-content" Jan 27 19:39:23 crc kubenswrapper[4915]: E0127 19:39:23.376468 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="registry-server" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.376481 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="registry-server" Jan 27 19:39:23 crc kubenswrapper[4915]: E0127 19:39:23.376506 4915 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="extract-utilities" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.376518 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="extract-utilities" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.376815 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38b2af2-cd4a-4d3d-a1c9-bfc638029a56" containerName="registry-server" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.378614 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9f6z" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.395831 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"] Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.513690 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.513779 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k4w\" (UniqueName: \"kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z" Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.513850 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content\") pod 
\"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.615020 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k4w\" (UniqueName: \"kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.615097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.615195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.615591 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.615643 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.639022 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k4w\" (UniqueName: \"kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w\") pod \"certified-operators-p9f6z\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") " pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:23 crc kubenswrapper[4915]: I0127 19:39:23.722993 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:24 crc kubenswrapper[4915]: I0127 19:39:24.178660 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"]
Jan 27 19:39:25 crc kubenswrapper[4915]: I0127 19:39:25.047785 4915 generic.go:334] "Generic (PLEG): container finished" podID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerID="4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d" exitCode=0
Jan 27 19:39:25 crc kubenswrapper[4915]: I0127 19:39:25.047888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerDied","Data":"4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d"}
Jan 27 19:39:25 crc kubenswrapper[4915]: I0127 19:39:25.048253 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerStarted","Data":"ee8c22f3fb11d75d1b2b47c9294fc33591f4464f8ea5d6bcbbbec85d79c8dbc1"}
Jan 27 19:39:27 crc kubenswrapper[4915]: I0127 19:39:27.071319 4915 generic.go:334] "Generic (PLEG): container finished" podID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerID="5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac" exitCode=0
Jan 27 19:39:27 crc kubenswrapper[4915]: I0127 19:39:27.071528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerDied","Data":"5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac"}
Jan 27 19:39:29 crc kubenswrapper[4915]: I0127 19:39:29.092102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerStarted","Data":"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"}
Jan 27 19:39:29 crc kubenswrapper[4915]: I0127 19:39:29.119536 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9f6z" podStartSLOduration=3.170641921 podStartE2EDuration="6.119516272s" podCreationTimestamp="2026-01-27 19:39:23 +0000 UTC" firstStartedPulling="2026-01-27 19:39:25.049631172 +0000 UTC m=+3456.407484836" lastFinishedPulling="2026-01-27 19:39:27.998505483 +0000 UTC m=+3459.356359187" observedRunningTime="2026-01-27 19:39:29.116760954 +0000 UTC m=+3460.474614618" watchObservedRunningTime="2026-01-27 19:39:29.119516272 +0000 UTC m=+3460.477369936"
Jan 27 19:39:33 crc kubenswrapper[4915]: I0127 19:39:33.723436 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:33 crc kubenswrapper[4915]: I0127 19:39:33.724250 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:33 crc kubenswrapper[4915]: I0127 19:39:33.792265 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:34 crc kubenswrapper[4915]: I0127 19:39:34.201069 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:34 crc kubenswrapper[4915]: I0127 19:39:34.258798 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"]
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.153997 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9f6z" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="registry-server" containerID="cri-o://5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84" gracePeriod=2
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.648507 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.722120 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content\") pod \"172eb5af-f447-4f3a-95cb-6f42e71de566\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") "
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.722172 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6k4w\" (UniqueName: \"kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w\") pod \"172eb5af-f447-4f3a-95cb-6f42e71de566\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") "
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.722300 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities\") pod \"172eb5af-f447-4f3a-95cb-6f42e71de566\" (UID: \"172eb5af-f447-4f3a-95cb-6f42e71de566\") "
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.725138 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities" (OuterVolumeSpecName: "utilities") pod "172eb5af-f447-4f3a-95cb-6f42e71de566" (UID: "172eb5af-f447-4f3a-95cb-6f42e71de566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.733011 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w" (OuterVolumeSpecName: "kube-api-access-q6k4w") pod "172eb5af-f447-4f3a-95cb-6f42e71de566" (UID: "172eb5af-f447-4f3a-95cb-6f42e71de566"). InnerVolumeSpecName "kube-api-access-q6k4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.790021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "172eb5af-f447-4f3a-95cb-6f42e71de566" (UID: "172eb5af-f447-4f3a-95cb-6f42e71de566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.824129 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.824184 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172eb5af-f447-4f3a-95cb-6f42e71de566-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:36 crc kubenswrapper[4915]: I0127 19:39:36.824200 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6k4w\" (UniqueName: \"kubernetes.io/projected/172eb5af-f447-4f3a-95cb-6f42e71de566-kube-api-access-q6k4w\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.167765 4915 generic.go:334] "Generic (PLEG): container finished" podID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerID="5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84" exitCode=0
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.167947 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerDied","Data":"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"}
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.168249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f6z" event={"ID":"172eb5af-f447-4f3a-95cb-6f42e71de566","Type":"ContainerDied","Data":"ee8c22f3fb11d75d1b2b47c9294fc33591f4464f8ea5d6bcbbbec85d79c8dbc1"}
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.168288 4915 scope.go:117] "RemoveContainer" containerID="5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.167957 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9f6z"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.211534 4915 scope.go:117] "RemoveContainer" containerID="5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.236752 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"]
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.243626 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9f6z"]
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.245663 4915 scope.go:117] "RemoveContainer" containerID="4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.281698 4915 scope.go:117] "RemoveContainer" containerID="5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"
Jan 27 19:39:37 crc kubenswrapper[4915]: E0127 19:39:37.282273 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84\": container with ID starting with 5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84 not found: ID does not exist" containerID="5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.282554 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84"} err="failed to get container status \"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84\": rpc error: code = NotFound desc = could not find container \"5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84\": container with ID starting with 5809f4b1f8642ce1e80ad9adfb99b6c028a8c1a4101a8574649d27558197aa84 not found: ID does not exist"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.282661 4915 scope.go:117] "RemoveContainer" containerID="5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac"
Jan 27 19:39:37 crc kubenswrapper[4915]: E0127 19:39:37.283252 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac\": container with ID starting with 5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac not found: ID does not exist" containerID="5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.283312 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac"} err="failed to get container status \"5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac\": rpc error: code = NotFound desc = could not find container \"5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac\": container with ID starting with 5507de0f5e0ee1b35f69089c81dcc6821ff28d5a130e264e6e0f103591a1acac not found: ID does not exist"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.283354 4915 scope.go:117] "RemoveContainer" containerID="4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d"
Jan 27 19:39:37 crc kubenswrapper[4915]: E0127 19:39:37.283924 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d\": container with ID starting with 4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d not found: ID does not exist" containerID="4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.283987 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d"} err="failed to get container status \"4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d\": rpc error: code = NotFound desc = could not find container \"4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d\": container with ID starting with 4a7c0fdf8bef0a708e7941d003c2ec7f583b080b763562952a828224e1a81f5d not found: ID does not exist"
Jan 27 19:39:37 crc kubenswrapper[4915]: I0127 19:39:37.367979 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" path="/var/lib/kubelet/pods/172eb5af-f447-4f3a-95cb-6f42e71de566/volumes"
Jan 27 19:39:50 crc kubenswrapper[4915]: I0127 19:39:50.625093 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:39:50 crc kubenswrapper[4915]: I0127 19:39:50.625871 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:39:50 crc kubenswrapper[4915]: I0127 19:39:50.625955 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:39:50 crc kubenswrapper[4915]: I0127 19:39:50.627020 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:39:50 crc kubenswrapper[4915]: I0127 19:39:50.627143 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b" gracePeriod=600
Jan 27 19:39:51 crc kubenswrapper[4915]: I0127 19:39:51.320832 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b" exitCode=0
Jan 27 19:39:51 crc kubenswrapper[4915]: I0127 19:39:51.320855 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b"}
Jan 27 19:39:51 crc kubenswrapper[4915]: I0127 19:39:51.321291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea"}
Jan 27 19:39:51 crc kubenswrapper[4915]: I0127 19:39:51.321319 4915 scope.go:117] "RemoveContainer" containerID="a158ca2da814be559988c2689f10066c5b1df6cd39fad6f73ec3087427723445"
Jan 27 19:41:50 crc kubenswrapper[4915]: I0127 19:41:50.625404 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:41:50 crc kubenswrapper[4915]: I0127 19:41:50.626957 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:42:20 crc kubenswrapper[4915]: I0127 19:42:20.625194 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:42:20 crc kubenswrapper[4915]: I0127 19:42:20.625857 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.624973 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.625652 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.625713 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.626591 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.626689 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" gracePeriod=600
Jan 27 19:42:50 crc kubenswrapper[4915]: E0127 19:42:50.764173 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.916479 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" exitCode=0
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.916537 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea"}
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.917140 4915 scope.go:117] "RemoveContainer" containerID="ebabb2d311c0998d9fd3e1502fa62ae7dbb56f8cfc48cf77fb70331b19ccae1b"
Jan 27 19:42:50 crc kubenswrapper[4915]: I0127 19:42:50.918115 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea"
Jan 27 19:42:50 crc kubenswrapper[4915]: E0127 19:42:50.918660 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.727920 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kf8sz"]
Jan 27 19:42:54 crc kubenswrapper[4915]: E0127 19:42:54.728488 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="registry-server"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.728500 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="registry-server"
Jan 27 19:42:54 crc kubenswrapper[4915]: E0127 19:42:54.728518 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="extract-utilities"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.728524 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="extract-utilities"
Jan 27 19:42:54 crc kubenswrapper[4915]: E0127 19:42:54.728543 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="extract-content"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.728551 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="extract-content"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.728735 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="172eb5af-f447-4f3a-95cb-6f42e71de566" containerName="registry-server"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.729675 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.749459 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf8sz"]
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.899278 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-utilities\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.899559 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9rk\" (UniqueName: \"kubernetes.io/projected/392339bf-6a1b-476b-9de9-7e3d179b9a06-kube-api-access-dx9rk\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:54 crc kubenswrapper[4915]: I0127 19:42:54.899967 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-catalog-content\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.001097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9rk\" (UniqueName: \"kubernetes.io/projected/392339bf-6a1b-476b-9de9-7e3d179b9a06-kube-api-access-dx9rk\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.001537 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-catalog-content\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.001729 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-utilities\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.002150 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-catalog-content\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.002175 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392339bf-6a1b-476b-9de9-7e3d179b9a06-utilities\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.024921 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9rk\" (UniqueName: \"kubernetes.io/projected/392339bf-6a1b-476b-9de9-7e3d179b9a06-kube-api-access-dx9rk\") pod \"community-operators-kf8sz\" (UID: \"392339bf-6a1b-476b-9de9-7e3d179b9a06\") " pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.050301 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.548844 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf8sz"]
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.960520 4915 generic.go:334] "Generic (PLEG): container finished" podID="392339bf-6a1b-476b-9de9-7e3d179b9a06" containerID="a2d01fe00cdc5e877e1d6d5ae5ca925c3bcdc4d75ecdc202561ba6401a128944" exitCode=0
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.960660 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8sz" event={"ID":"392339bf-6a1b-476b-9de9-7e3d179b9a06","Type":"ContainerDied","Data":"a2d01fe00cdc5e877e1d6d5ae5ca925c3bcdc4d75ecdc202561ba6401a128944"}
Jan 27 19:42:55 crc kubenswrapper[4915]: I0127 19:42:55.960995 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8sz" event={"ID":"392339bf-6a1b-476b-9de9-7e3d179b9a06","Type":"ContainerStarted","Data":"6da9232d981a1da8d8d06c16ade94ce2b74fa01b6575a56aa22fed593270a84d"}
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.124145 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"]
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.125428 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.140746 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"]
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.218768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.218887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbxc\" (UniqueName: \"kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.218950 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.320156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.320282 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbxc\" (UniqueName: \"kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.320382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.320826 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.321122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.342744 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbxc\" (UniqueName: \"kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc\") pod \"redhat-marketplace-mvcv4\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.477926 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvcv4"
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.739289 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"]
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.968890 4915 generic.go:334] "Generic (PLEG): container finished" podID="9dcf4187-2dce-4457-8c93-c486b030c357" containerID="9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1" exitCode=0
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.969089 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerDied","Data":"9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1"}
Jan 27 19:42:56 crc kubenswrapper[4915]: I0127 19:42:56.969187 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerStarted","Data":"9a13af9afb9548ad9a4940355903cd4e9f16ecda831da479a49ec71fb0ae9961"}
Jan 27 19:42:57 crc kubenswrapper[4915]: I0127 19:42:57.977909 4915 generic.go:334] "Generic (PLEG): container finished" podID="9dcf4187-2dce-4457-8c93-c486b030c357" containerID="555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3" exitCode=0
Jan 27 19:42:57 crc kubenswrapper[4915]: I0127 19:42:57.977948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerDied","Data":"555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3"}
Jan 27 19:42:59 crc kubenswrapper[4915]: I0127 19:42:59.995435 4915 generic.go:334] "Generic (PLEG): container finished" podID="392339bf-6a1b-476b-9de9-7e3d179b9a06" containerID="4e4fbc0d1ad0e6d993bb1a443e29b42f47eeca4f0f6fbb6b096b1dbe6fd2faf7" exitCode=0
Jan 27 19:42:59 crc kubenswrapper[4915]: I0127 19:42:59.995513 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8sz" event={"ID":"392339bf-6a1b-476b-9de9-7e3d179b9a06","Type":"ContainerDied","Data":"4e4fbc0d1ad0e6d993bb1a443e29b42f47eeca4f0f6fbb6b096b1dbe6fd2faf7"}
Jan 27 19:43:00 crc kubenswrapper[4915]: I0127 19:42:59.999313 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerStarted","Data":"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0"}
Jan 27 19:43:00 crc kubenswrapper[4915]: I0127 19:43:00.034544 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvcv4" podStartSLOduration=1.506684645 podStartE2EDuration="4.034518855s" podCreationTimestamp="2026-01-27 19:42:56 +0000 UTC" firstStartedPulling="2026-01-27 19:42:56.970045891 +0000 UTC m=+3668.327899555" lastFinishedPulling="2026-01-27 19:42:59.497880091 +0000 UTC m=+3670.855733765" observedRunningTime="2026-01-27 19:43:00.033313055 +0000 UTC m=+3671.391166729" watchObservedRunningTime="2026-01-27 19:43:00.034518855 +0000 UTC m=+3671.392372559"
Jan 27 19:43:01 crc kubenswrapper[4915]: I0127 19:43:01.030894 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf8sz" event={"ID":"392339bf-6a1b-476b-9de9-7e3d179b9a06","Type":"ContainerStarted","Data":"461b971c2d94f8071c08611dfe9dcc34f4675ddcdc6cdb9bd111712ca72f33ae"}
Jan 27 19:43:01 crc kubenswrapper[4915]: I0127 19:43:01.048452 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kf8sz" podStartSLOduration=2.6116769939999998 podStartE2EDuration="7.048438816s" podCreationTimestamp="2026-01-27 19:42:54 +0000 UTC" firstStartedPulling="2026-01-27 19:42:55.962777765 +0000 UTC m=+3667.320631469" lastFinishedPulling="2026-01-27 19:43:00.399539617 +0000 UTC m=+3671.757393291" observedRunningTime="2026-01-27 19:43:01.047041551 +0000 UTC m=+3672.404895235" watchObservedRunningTime="2026-01-27 19:43:01.048438816 +0000 UTC m=+3672.406292480"
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.050681 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.051088 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.124112 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.176879 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kf8sz"
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.279102 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf8sz"]
Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.359136 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea"
Jan 27 19:43:05 crc kubenswrapper[4915]: E0127 19:43:05.359421 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.371133 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgszs"] Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.371528 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fgszs" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="registry-server" containerID="cri-o://26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e" gracePeriod=2 Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.760689 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fgszs" Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.962490 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content\") pod \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.962542 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities\") pod \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.962596 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtdng\" (UniqueName: \"kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng\") pod \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\" (UID: \"17b9de2a-d796-4189-8d2c-f8ce1dd65de7\") " Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.963191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities" (OuterVolumeSpecName: "utilities") pod "17b9de2a-d796-4189-8d2c-f8ce1dd65de7" (UID: "17b9de2a-d796-4189-8d2c-f8ce1dd65de7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:05 crc kubenswrapper[4915]: I0127 19:43:05.969615 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng" (OuterVolumeSpecName: "kube-api-access-dtdng") pod "17b9de2a-d796-4189-8d2c-f8ce1dd65de7" (UID: "17b9de2a-d796-4189-8d2c-f8ce1dd65de7"). InnerVolumeSpecName "kube-api-access-dtdng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.004203 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17b9de2a-d796-4189-8d2c-f8ce1dd65de7" (UID: "17b9de2a-d796-4189-8d2c-f8ce1dd65de7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.064583 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.064623 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.064636 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtdng\" (UniqueName: \"kubernetes.io/projected/17b9de2a-d796-4189-8d2c-f8ce1dd65de7-kube-api-access-dtdng\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.073078 4915 generic.go:334] "Generic (PLEG): container finished" podID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerID="26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e" exitCode=0 Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.073148 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fgszs" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.073198 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerDied","Data":"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e"} Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.073264 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fgszs" event={"ID":"17b9de2a-d796-4189-8d2c-f8ce1dd65de7","Type":"ContainerDied","Data":"7c57d5c01fe09bfbf94c1a53a92c0c230acd8febfdb212cab16472d6614b100b"} Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.073301 4915 scope.go:117] "RemoveContainer" containerID="26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.104427 4915 scope.go:117] "RemoveContainer" containerID="d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.105005 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fgszs"] Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.109660 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fgszs"] Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.122766 4915 scope.go:117] "RemoveContainer" containerID="c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.145635 4915 scope.go:117] "RemoveContainer" containerID="26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e" Jan 27 19:43:06 crc kubenswrapper[4915]: E0127 19:43:06.146389 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e\": container with ID starting with 26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e not found: ID does not exist" containerID="26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.146422 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e"} err="failed to get container status \"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e\": rpc error: code = NotFound desc = could not find container \"26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e\": container with ID starting with 26b266d0bf6b3250821b0afe58ea56c59a898bbf3110391e070175c409b8b94e not found: ID does not exist" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.146443 4915 scope.go:117] "RemoveContainer" containerID="d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2" Jan 27 19:43:06 crc kubenswrapper[4915]: E0127 19:43:06.146738 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2\": container with ID starting with d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2 not found: ID does not exist" containerID="d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.146760 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2"} err="failed to get container status \"d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2\": rpc error: code = NotFound desc = could not find container \"d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2\": container with ID 
starting with d4c4187252117852bfad7fb89310092e792a490c41f3db4ef52a33067455deb2 not found: ID does not exist" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.146774 4915 scope.go:117] "RemoveContainer" containerID="c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454" Jan 27 19:43:06 crc kubenswrapper[4915]: E0127 19:43:06.147176 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454\": container with ID starting with c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454 not found: ID does not exist" containerID="c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.147218 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454"} err="failed to get container status \"c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454\": rpc error: code = NotFound desc = could not find container \"c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454\": container with ID starting with c29a0b49ea599bf4e0ad18d5cfbcd010bab747bd5d740109a6043fa07624e454 not found: ID does not exist" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.479121 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.479198 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:06 crc kubenswrapper[4915]: I0127 19:43:06.539315 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:07 crc kubenswrapper[4915]: I0127 19:43:07.129653 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:07 crc kubenswrapper[4915]: I0127 19:43:07.372746 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" path="/var/lib/kubelet/pods/17b9de2a-d796-4189-8d2c-f8ce1dd65de7/volumes" Jan 27 19:43:08 crc kubenswrapper[4915]: I0127 19:43:08.568866 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"] Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.096863 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvcv4" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="registry-server" containerID="cri-o://173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0" gracePeriod=2 Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.490299 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.627861 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content\") pod \"9dcf4187-2dce-4457-8c93-c486b030c357\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.627939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbxc\" (UniqueName: \"kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc\") pod \"9dcf4187-2dce-4457-8c93-c486b030c357\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.628138 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities\") pod \"9dcf4187-2dce-4457-8c93-c486b030c357\" (UID: \"9dcf4187-2dce-4457-8c93-c486b030c357\") " Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.629486 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities" (OuterVolumeSpecName: "utilities") pod "9dcf4187-2dce-4457-8c93-c486b030c357" (UID: "9dcf4187-2dce-4457-8c93-c486b030c357"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.634076 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc" (OuterVolumeSpecName: "kube-api-access-qtbxc") pod "9dcf4187-2dce-4457-8c93-c486b030c357" (UID: "9dcf4187-2dce-4457-8c93-c486b030c357"). InnerVolumeSpecName "kube-api-access-qtbxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.652603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dcf4187-2dce-4457-8c93-c486b030c357" (UID: "9dcf4187-2dce-4457-8c93-c486b030c357"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.730047 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.730286 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbxc\" (UniqueName: \"kubernetes.io/projected/9dcf4187-2dce-4457-8c93-c486b030c357-kube-api-access-qtbxc\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:09 crc kubenswrapper[4915]: I0127 19:43:09.730391 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dcf4187-2dce-4457-8c93-c486b030c357-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.109453 4915 generic.go:334] "Generic (PLEG): container finished" podID="9dcf4187-2dce-4457-8c93-c486b030c357" containerID="173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0" exitCode=0 Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.109575 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerDied","Data":"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0"} Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.109769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-mvcv4" event={"ID":"9dcf4187-2dce-4457-8c93-c486b030c357","Type":"ContainerDied","Data":"9a13af9afb9548ad9a4940355903cd4e9f16ecda831da479a49ec71fb0ae9961"} Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.109804 4915 scope.go:117] "RemoveContainer" containerID="173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.109597 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvcv4" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.149724 4915 scope.go:117] "RemoveContainer" containerID="555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.151519 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"] Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.161700 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvcv4"] Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.179009 4915 scope.go:117] "RemoveContainer" containerID="9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.218111 4915 scope.go:117] "RemoveContainer" containerID="173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0" Jan 27 19:43:10 crc kubenswrapper[4915]: E0127 19:43:10.218647 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0\": container with ID starting with 173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0 not found: ID does not exist" containerID="173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.218705 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0"} err="failed to get container status \"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0\": rpc error: code = NotFound desc = could not find container \"173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0\": container with ID starting with 173b2b7d00c8158b0ba1b957dbfe402f7e497c7a9a2bc6496c7a5c1ecb7bcac0 not found: ID does not exist" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.218739 4915 scope.go:117] "RemoveContainer" containerID="555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3" Jan 27 19:43:10 crc kubenswrapper[4915]: E0127 19:43:10.219062 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3\": container with ID starting with 555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3 not found: ID does not exist" containerID="555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.219109 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3"} err="failed to get container status \"555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3\": rpc error: code = NotFound desc = could not find container \"555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3\": container with ID starting with 555239d34d572b633a120ca06e0a2eba61580924659f54c6f27997c18728dad3 not found: ID does not exist" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.219141 4915 scope.go:117] "RemoveContainer" containerID="9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1" Jan 27 19:43:10 crc kubenswrapper[4915]: E0127 
19:43:10.219542 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1\": container with ID starting with 9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1 not found: ID does not exist" containerID="9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1" Jan 27 19:43:10 crc kubenswrapper[4915]: I0127 19:43:10.219582 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1"} err="failed to get container status \"9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1\": rpc error: code = NotFound desc = could not find container \"9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1\": container with ID starting with 9831e739119f1c826b8b0752713c1459ef48e01ca038b70a0aab93ca73f118e1 not found: ID does not exist" Jan 27 19:43:11 crc kubenswrapper[4915]: I0127 19:43:11.372494 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" path="/var/lib/kubelet/pods/9dcf4187-2dce-4457-8c93-c486b030c357/volumes" Jan 27 19:43:19 crc kubenswrapper[4915]: I0127 19:43:19.365558 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:43:19 crc kubenswrapper[4915]: E0127 19:43:19.366842 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:43:32 crc kubenswrapper[4915]: I0127 19:43:32.357307 
4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:43:32 crc kubenswrapper[4915]: E0127 19:43:32.358427 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:43:46 crc kubenswrapper[4915]: I0127 19:43:46.358215 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:43:46 crc kubenswrapper[4915]: E0127 19:43:46.359269 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:43:57 crc kubenswrapper[4915]: I0127 19:43:57.359259 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:43:57 crc kubenswrapper[4915]: E0127 19:43:57.360359 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:44:10 crc kubenswrapper[4915]: I0127 
19:44:10.358505 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:44:10 crc kubenswrapper[4915]: E0127 19:44:10.359439 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:44:23 crc kubenswrapper[4915]: I0127 19:44:23.357834 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:44:23 crc kubenswrapper[4915]: E0127 19:44:23.358746 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:44:37 crc kubenswrapper[4915]: I0127 19:44:37.358398 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:44:37 crc kubenswrapper[4915]: E0127 19:44:37.359445 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:44:51 crc 
kubenswrapper[4915]: I0127 19:44:51.358722 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:44:51 crc kubenswrapper[4915]: E0127 19:44:51.361408 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.174039 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645"] Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175019 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175035 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175044 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175052 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175070 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175079 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175091 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175099 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175111 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175118 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[4915]: E0127 19:45:00.175141 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175148 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175299 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcf4187-2dce-4457-8c93-c486b030c357" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175323 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b9de2a-d796-4189-8d2c-f8ce1dd65de7" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.175898 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.178234 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.179152 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.187816 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645"] Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.233226 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.233302 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.233467 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7wj6\" (UniqueName: \"kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.335346 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7wj6\" (UniqueName: \"kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.335896 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.336217 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.337654 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.344250 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.357110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7wj6\" (UniqueName: \"kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6\") pod \"collect-profiles-29492385-4s645\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.505596 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:00 crc kubenswrapper[4915]: I0127 19:45:00.764899 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645"] Jan 27 19:45:01 crc kubenswrapper[4915]: I0127 19:45:01.182866 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" event={"ID":"36574015-0446-4376-ad57-bf528f20af86","Type":"ContainerStarted","Data":"077aec338595ada7c31a2a9397417f7765784b2b2d0e6940e6145f228fd7fbc9"} Jan 27 19:45:01 crc kubenswrapper[4915]: I0127 19:45:01.182917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" event={"ID":"36574015-0446-4376-ad57-bf528f20af86","Type":"ContainerStarted","Data":"0366eab93e0a1ac1cba49d293285e21b14d79bcc5efd389514f91242656582d8"} Jan 27 19:45:01 crc kubenswrapper[4915]: I0127 19:45:01.205342 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" 
podStartSLOduration=1.205324084 podStartE2EDuration="1.205324084s" podCreationTimestamp="2026-01-27 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:45:01.200545756 +0000 UTC m=+3792.558399440" watchObservedRunningTime="2026-01-27 19:45:01.205324084 +0000 UTC m=+3792.563177758" Jan 27 19:45:02 crc kubenswrapper[4915]: I0127 19:45:02.193078 4915 generic.go:334] "Generic (PLEG): container finished" podID="36574015-0446-4376-ad57-bf528f20af86" containerID="077aec338595ada7c31a2a9397417f7765784b2b2d0e6940e6145f228fd7fbc9" exitCode=0 Jan 27 19:45:02 crc kubenswrapper[4915]: I0127 19:45:02.193325 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" event={"ID":"36574015-0446-4376-ad57-bf528f20af86","Type":"ContainerDied","Data":"077aec338595ada7c31a2a9397417f7765784b2b2d0e6940e6145f228fd7fbc9"} Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.358455 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:45:03 crc kubenswrapper[4915]: E0127 19:45:03.358719 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.517709 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.590073 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7wj6\" (UniqueName: \"kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6\") pod \"36574015-0446-4376-ad57-bf528f20af86\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.590147 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume\") pod \"36574015-0446-4376-ad57-bf528f20af86\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.590171 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume\") pod \"36574015-0446-4376-ad57-bf528f20af86\" (UID: \"36574015-0446-4376-ad57-bf528f20af86\") " Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.590960 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume" (OuterVolumeSpecName: "config-volume") pod "36574015-0446-4376-ad57-bf528f20af86" (UID: "36574015-0446-4376-ad57-bf528f20af86"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.597899 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6" (OuterVolumeSpecName: "kube-api-access-j7wj6") pod "36574015-0446-4376-ad57-bf528f20af86" (UID: "36574015-0446-4376-ad57-bf528f20af86"). 
InnerVolumeSpecName "kube-api-access-j7wj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.598280 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "36574015-0446-4376-ad57-bf528f20af86" (UID: "36574015-0446-4376-ad57-bf528f20af86"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.692437 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36574015-0446-4376-ad57-bf528f20af86-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.692478 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36574015-0446-4376-ad57-bf528f20af86-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[4915]: I0127 19:45:03.692488 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7wj6\" (UniqueName: \"kubernetes.io/projected/36574015-0446-4376-ad57-bf528f20af86-kube-api-access-j7wj6\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:04 crc kubenswrapper[4915]: I0127 19:45:04.212122 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" event={"ID":"36574015-0446-4376-ad57-bf528f20af86","Type":"ContainerDied","Data":"0366eab93e0a1ac1cba49d293285e21b14d79bcc5efd389514f91242656582d8"} Jan 27 19:45:04 crc kubenswrapper[4915]: I0127 19:45:04.212166 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0366eab93e0a1ac1cba49d293285e21b14d79bcc5efd389514f91242656582d8" Jan 27 19:45:04 crc kubenswrapper[4915]: I0127 19:45:04.212205 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645" Jan 27 19:45:04 crc kubenswrapper[4915]: I0127 19:45:04.290827 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"] Jan 27 19:45:04 crc kubenswrapper[4915]: I0127 19:45:04.302093 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-srbs2"] Jan 27 19:45:05 crc kubenswrapper[4915]: I0127 19:45:05.371323 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4cf525f-49ce-424b-8dbc-1c3f807b78d7" path="/var/lib/kubelet/pods/d4cf525f-49ce-424b-8dbc-1c3f807b78d7/volumes" Jan 27 19:45:15 crc kubenswrapper[4915]: I0127 19:45:15.358096 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:45:15 crc kubenswrapper[4915]: E0127 19:45:15.359197 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:45:29 crc kubenswrapper[4915]: I0127 19:45:29.366744 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:45:29 crc kubenswrapper[4915]: E0127 19:45:29.368138 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:45:44 crc kubenswrapper[4915]: I0127 19:45:44.358205 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:45:44 crc kubenswrapper[4915]: E0127 19:45:44.359530 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:45:55 crc kubenswrapper[4915]: I0127 19:45:55.027898 4915 scope.go:117] "RemoveContainer" containerID="c01b0c9b5aeafc4712f319baa61e2fc29c98c01b0aad9c6b22b1ebfea66c9459" Jan 27 19:45:59 crc kubenswrapper[4915]: I0127 19:45:59.364767 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:45:59 crc kubenswrapper[4915]: E0127 19:45:59.366008 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:46:11 crc kubenswrapper[4915]: I0127 19:46:11.358369 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:46:11 crc kubenswrapper[4915]: E0127 19:46:11.360772 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:46:26 crc kubenswrapper[4915]: I0127 19:46:26.357566 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:46:26 crc kubenswrapper[4915]: E0127 19:46:26.358844 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:46:41 crc kubenswrapper[4915]: I0127 19:46:41.362223 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:46:41 crc kubenswrapper[4915]: E0127 19:46:41.363165 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:46:54 crc kubenswrapper[4915]: I0127 19:46:54.357707 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:46:54 crc kubenswrapper[4915]: E0127 19:46:54.358922 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:47:09 crc kubenswrapper[4915]: I0127 19:47:09.366300 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:47:09 crc kubenswrapper[4915]: E0127 19:47:09.367326 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:47:24 crc kubenswrapper[4915]: I0127 19:47:24.358374 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:47:24 crc kubenswrapper[4915]: E0127 19:47:24.359625 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:47:39 crc kubenswrapper[4915]: I0127 19:47:39.367869 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:47:39 crc kubenswrapper[4915]: E0127 19:47:39.368671 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:47:54 crc kubenswrapper[4915]: I0127 19:47:54.357387 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:47:55 crc kubenswrapper[4915]: I0127 19:47:55.690218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5"} Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.673452 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:06 crc kubenswrapper[4915]: E0127 19:50:06.674531 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36574015-0446-4376-ad57-bf528f20af86" containerName="collect-profiles" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.674558 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="36574015-0446-4376-ad57-bf528f20af86" containerName="collect-profiles" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.674901 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="36574015-0446-4376-ad57-bf528f20af86" containerName="collect-profiles" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.676533 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.688484 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.802991 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.803268 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlg9q\" (UniqueName: \"kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.803411 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.904916 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.905002 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.905032 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlg9q\" (UniqueName: \"kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.905406 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.905661 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.927611 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlg9q\" (UniqueName: \"kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q\") pod \"certified-operators-qtjgc\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:06 crc kubenswrapper[4915]: I0127 19:50:06.997878 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:07 crc kubenswrapper[4915]: I0127 19:50:07.452137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:07 crc kubenswrapper[4915]: I0127 19:50:07.846100 4915 generic.go:334] "Generic (PLEG): container finished" podID="07f87f4e-694f-4376-bca2-14b958c85784" containerID="0dbc63a565c7521e17ec484dd0518b5fcc4f4651896153cc62a4d4a0840b03e5" exitCode=0 Jan 27 19:50:07 crc kubenswrapper[4915]: I0127 19:50:07.846169 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerDied","Data":"0dbc63a565c7521e17ec484dd0518b5fcc4f4651896153cc62a4d4a0840b03e5"} Jan 27 19:50:07 crc kubenswrapper[4915]: I0127 19:50:07.846472 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerStarted","Data":"d2151af47cfa07f2a10b6ffdcabca83971580d8c5c48c0ec2f21b5016eeb6d27"} Jan 27 19:50:07 crc kubenswrapper[4915]: I0127 19:50:07.848713 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:50:08 crc kubenswrapper[4915]: I0127 19:50:08.855782 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerStarted","Data":"cee39e299f04de9580a00066a0efb6aea49d4de73ce88691e91f09299a50d2cd"} Jan 27 19:50:09 crc kubenswrapper[4915]: I0127 19:50:09.865951 4915 generic.go:334] "Generic (PLEG): container finished" podID="07f87f4e-694f-4376-bca2-14b958c85784" containerID="cee39e299f04de9580a00066a0efb6aea49d4de73ce88691e91f09299a50d2cd" exitCode=0 Jan 27 19:50:09 crc kubenswrapper[4915]: I0127 19:50:09.866047 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerDied","Data":"cee39e299f04de9580a00066a0efb6aea49d4de73ce88691e91f09299a50d2cd"} Jan 27 19:50:10 crc kubenswrapper[4915]: I0127 19:50:10.877729 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerStarted","Data":"9d789604d980d39a044a2b7b90d7545c7c693b757ed29752c82751d9fe67f976"} Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.244963 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qtjgc" podStartSLOduration=4.725503651 podStartE2EDuration="7.244946772s" podCreationTimestamp="2026-01-27 19:50:06 +0000 UTC" firstStartedPulling="2026-01-27 19:50:07.848294604 +0000 UTC m=+4099.206148308" lastFinishedPulling="2026-01-27 19:50:10.367737755 +0000 UTC m=+4101.725591429" observedRunningTime="2026-01-27 19:50:10.907141637 +0000 UTC m=+4102.264995291" watchObservedRunningTime="2026-01-27 19:50:13.244946772 +0000 UTC m=+4104.602800436" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.249555 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.250821 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.266784 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.307449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhnwr\" (UniqueName: \"kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.307640 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.307865 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.409057 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.409145 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zhnwr\" (UniqueName: \"kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.409181 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.409662 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.409743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.428194 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhnwr\" (UniqueName: \"kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr\") pod \"redhat-operators-ps8vh\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.570228 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.809385 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:13 crc kubenswrapper[4915]: W0127 19:50:13.813052 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e7b729_afd2_4975_840e_014d32415c3e.slice/crio-54d0e75eff0be70b80c5ade993ca74ca3f09611b8dc0069edaa4359658d1709a WatchSource:0}: Error finding container 54d0e75eff0be70b80c5ade993ca74ca3f09611b8dc0069edaa4359658d1709a: Status 404 returned error can't find the container with id 54d0e75eff0be70b80c5ade993ca74ca3f09611b8dc0069edaa4359658d1709a Jan 27 19:50:13 crc kubenswrapper[4915]: I0127 19:50:13.901700 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerStarted","Data":"54d0e75eff0be70b80c5ade993ca74ca3f09611b8dc0069edaa4359658d1709a"} Jan 27 19:50:14 crc kubenswrapper[4915]: I0127 19:50:14.914106 4915 generic.go:334] "Generic (PLEG): container finished" podID="72e7b729-afd2-4975-840e-014d32415c3e" containerID="27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96" exitCode=0 Jan 27 19:50:14 crc kubenswrapper[4915]: I0127 19:50:14.914172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerDied","Data":"27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96"} Jan 27 19:50:15 crc kubenswrapper[4915]: I0127 19:50:15.923274 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" 
event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerStarted","Data":"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086"} Jan 27 19:50:16 crc kubenswrapper[4915]: I0127 19:50:16.931026 4915 generic.go:334] "Generic (PLEG): container finished" podID="72e7b729-afd2-4975-840e-014d32415c3e" containerID="7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086" exitCode=0 Jan 27 19:50:16 crc kubenswrapper[4915]: I0127 19:50:16.931058 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerDied","Data":"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086"} Jan 27 19:50:16 crc kubenswrapper[4915]: I0127 19:50:16.998594 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:16 crc kubenswrapper[4915]: I0127 19:50:16.998681 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:17 crc kubenswrapper[4915]: I0127 19:50:17.043108 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:18 crc kubenswrapper[4915]: I0127 19:50:18.017451 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:18 crc kubenswrapper[4915]: I0127 19:50:18.951225 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerStarted","Data":"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1"} Jan 27 19:50:18 crc kubenswrapper[4915]: I0127 19:50:18.975071 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ps8vh" 
podStartSLOduration=2.622534505 podStartE2EDuration="5.975047775s" podCreationTimestamp="2026-01-27 19:50:13 +0000 UTC" firstStartedPulling="2026-01-27 19:50:14.916938276 +0000 UTC m=+4106.274791980" lastFinishedPulling="2026-01-27 19:50:18.269451586 +0000 UTC m=+4109.627305250" observedRunningTime="2026-01-27 19:50:18.967951101 +0000 UTC m=+4110.325804775" watchObservedRunningTime="2026-01-27 19:50:18.975047775 +0000 UTC m=+4110.332901439" Jan 27 19:50:19 crc kubenswrapper[4915]: I0127 19:50:19.443370 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:19 crc kubenswrapper[4915]: I0127 19:50:19.958316 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qtjgc" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="registry-server" containerID="cri-o://9d789604d980d39a044a2b7b90d7545c7c693b757ed29752c82751d9fe67f976" gracePeriod=2 Jan 27 19:50:20 crc kubenswrapper[4915]: I0127 19:50:20.624568 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:50:20 crc kubenswrapper[4915]: I0127 19:50:20.624649 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:50:20 crc kubenswrapper[4915]: I0127 19:50:20.966572 4915 generic.go:334] "Generic (PLEG): container finished" podID="07f87f4e-694f-4376-bca2-14b958c85784" containerID="9d789604d980d39a044a2b7b90d7545c7c693b757ed29752c82751d9fe67f976" exitCode=0 Jan 27 
19:50:20 crc kubenswrapper[4915]: I0127 19:50:20.966641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerDied","Data":"9d789604d980d39a044a2b7b90d7545c7c693b757ed29752c82751d9fe67f976"} Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.339428 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.524566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities\") pod \"07f87f4e-694f-4376-bca2-14b958c85784\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.524653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content\") pod \"07f87f4e-694f-4376-bca2-14b958c85784\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.524745 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlg9q\" (UniqueName: \"kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q\") pod \"07f87f4e-694f-4376-bca2-14b958c85784\" (UID: \"07f87f4e-694f-4376-bca2-14b958c85784\") " Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.526238 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities" (OuterVolumeSpecName: "utilities") pod "07f87f4e-694f-4376-bca2-14b958c85784" (UID: "07f87f4e-694f-4376-bca2-14b958c85784"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.530736 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q" (OuterVolumeSpecName: "kube-api-access-qlg9q") pod "07f87f4e-694f-4376-bca2-14b958c85784" (UID: "07f87f4e-694f-4376-bca2-14b958c85784"). InnerVolumeSpecName "kube-api-access-qlg9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.573473 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f87f4e-694f-4376-bca2-14b958c85784" (UID: "07f87f4e-694f-4376-bca2-14b958c85784"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.627289 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlg9q\" (UniqueName: \"kubernetes.io/projected/07f87f4e-694f-4376-bca2-14b958c85784-kube-api-access-qlg9q\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.627903 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.627945 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f87f4e-694f-4376-bca2-14b958c85784-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.975441 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtjgc" 
event={"ID":"07f87f4e-694f-4376-bca2-14b958c85784","Type":"ContainerDied","Data":"d2151af47cfa07f2a10b6ffdcabca83971580d8c5c48c0ec2f21b5016eeb6d27"} Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.975505 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtjgc" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.975509 4915 scope.go:117] "RemoveContainer" containerID="9d789604d980d39a044a2b7b90d7545c7c693b757ed29752c82751d9fe67f976" Jan 27 19:50:21 crc kubenswrapper[4915]: I0127 19:50:21.995884 4915 scope.go:117] "RemoveContainer" containerID="cee39e299f04de9580a00066a0efb6aea49d4de73ce88691e91f09299a50d2cd" Jan 27 19:50:22 crc kubenswrapper[4915]: I0127 19:50:22.020553 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:22 crc kubenswrapper[4915]: I0127 19:50:22.024007 4915 scope.go:117] "RemoveContainer" containerID="0dbc63a565c7521e17ec484dd0518b5fcc4f4651896153cc62a4d4a0840b03e5" Jan 27 19:50:22 crc kubenswrapper[4915]: I0127 19:50:22.030602 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qtjgc"] Jan 27 19:50:23 crc kubenswrapper[4915]: I0127 19:50:23.365385 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f87f4e-694f-4376-bca2-14b958c85784" path="/var/lib/kubelet/pods/07f87f4e-694f-4376-bca2-14b958c85784/volumes" Jan 27 19:50:23 crc kubenswrapper[4915]: I0127 19:50:23.594045 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:23 crc kubenswrapper[4915]: I0127 19:50:23.594477 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:24 crc kubenswrapper[4915]: I0127 19:50:24.660449 4915 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-ps8vh" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="registry-server" probeResult="failure" output=< Jan 27 19:50:24 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 19:50:24 crc kubenswrapper[4915]: > Jan 27 19:50:33 crc kubenswrapper[4915]: I0127 19:50:33.622924 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:33 crc kubenswrapper[4915]: I0127 19:50:33.664708 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:33 crc kubenswrapper[4915]: I0127 19:50:33.860484 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.099130 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ps8vh" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="registry-server" containerID="cri-o://44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1" gracePeriod=2 Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.559066 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.629854 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content\") pod \"72e7b729-afd2-4975-840e-014d32415c3e\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.630011 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhnwr\" (UniqueName: \"kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr\") pod \"72e7b729-afd2-4975-840e-014d32415c3e\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.630134 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities\") pod \"72e7b729-afd2-4975-840e-014d32415c3e\" (UID: \"72e7b729-afd2-4975-840e-014d32415c3e\") " Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.631690 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities" (OuterVolumeSpecName: "utilities") pod "72e7b729-afd2-4975-840e-014d32415c3e" (UID: "72e7b729-afd2-4975-840e-014d32415c3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.639036 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr" (OuterVolumeSpecName: "kube-api-access-zhnwr") pod "72e7b729-afd2-4975-840e-014d32415c3e" (UID: "72e7b729-afd2-4975-840e-014d32415c3e"). InnerVolumeSpecName "kube-api-access-zhnwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.639216 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.740711 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhnwr\" (UniqueName: \"kubernetes.io/projected/72e7b729-afd2-4975-840e-014d32415c3e-kube-api-access-zhnwr\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.801710 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72e7b729-afd2-4975-840e-014d32415c3e" (UID: "72e7b729-afd2-4975-840e-014d32415c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:50:35 crc kubenswrapper[4915]: I0127 19:50:35.842354 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e7b729-afd2-4975-840e-014d32415c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.109075 4915 generic.go:334] "Generic (PLEG): container finished" podID="72e7b729-afd2-4975-840e-014d32415c3e" containerID="44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1" exitCode=0 Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.109148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerDied","Data":"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1"} Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.109198 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-ps8vh" event={"ID":"72e7b729-afd2-4975-840e-014d32415c3e","Type":"ContainerDied","Data":"54d0e75eff0be70b80c5ade993ca74ca3f09611b8dc0069edaa4359658d1709a"} Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.109221 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps8vh" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.109226 4915 scope.go:117] "RemoveContainer" containerID="44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.134224 4915 scope.go:117] "RemoveContainer" containerID="7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.160318 4915 scope.go:117] "RemoveContainer" containerID="27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.165549 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.172510 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ps8vh"] Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.186057 4915 scope.go:117] "RemoveContainer" containerID="44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1" Jan 27 19:50:36 crc kubenswrapper[4915]: E0127 19:50:36.186437 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1\": container with ID starting with 44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1 not found: ID does not exist" containerID="44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.186495 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1"} err="failed to get container status \"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1\": rpc error: code = NotFound desc = could not find container \"44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1\": container with ID starting with 44f5e2e59b2ba57721297848714571a4d90c0536e49e74cf59e365cd18acd6b1 not found: ID does not exist" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.186520 4915 scope.go:117] "RemoveContainer" containerID="7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086" Jan 27 19:50:36 crc kubenswrapper[4915]: E0127 19:50:36.186850 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086\": container with ID starting with 7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086 not found: ID does not exist" containerID="7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.186902 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086"} err="failed to get container status \"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086\": rpc error: code = NotFound desc = could not find container \"7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086\": container with ID starting with 7a0e276885c131b9c185ed13dbc6aa87ffce634c9ba5bc32abe3903391a95086 not found: ID does not exist" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.186931 4915 scope.go:117] "RemoveContainer" containerID="27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96" Jan 27 19:50:36 crc kubenswrapper[4915]: E0127 
19:50:36.187361 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96\": container with ID starting with 27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96 not found: ID does not exist" containerID="27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96" Jan 27 19:50:36 crc kubenswrapper[4915]: I0127 19:50:36.187409 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96"} err="failed to get container status \"27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96\": rpc error: code = NotFound desc = could not find container \"27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96\": container with ID starting with 27cde2bc936516beda0c5fa949367489bd39cac8630ca4fd4329ab64c4ef3f96 not found: ID does not exist" Jan 27 19:50:37 crc kubenswrapper[4915]: I0127 19:50:37.373245 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e7b729-afd2-4975-840e-014d32415c3e" path="/var/lib/kubelet/pods/72e7b729-afd2-4975-840e-014d32415c3e/volumes" Jan 27 19:50:50 crc kubenswrapper[4915]: I0127 19:50:50.624597 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:50:50 crc kubenswrapper[4915]: I0127 19:50:50.625276 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 19:51:20 crc kubenswrapper[4915]: I0127 19:51:20.624968 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:51:20 crc kubenswrapper[4915]: I0127 19:51:20.626869 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:51:20 crc kubenswrapper[4915]: I0127 19:51:20.627013 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 19:51:20 crc kubenswrapper[4915]: I0127 19:51:20.627708 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:51:20 crc kubenswrapper[4915]: I0127 19:51:20.627912 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5" gracePeriod=600 Jan 27 19:51:21 crc kubenswrapper[4915]: I0127 19:51:21.503130 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" 
containerID="cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5" exitCode=0 Jan 27 19:51:21 crc kubenswrapper[4915]: I0127 19:51:21.503230 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5"} Jan 27 19:51:21 crc kubenswrapper[4915]: I0127 19:51:21.503745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c"} Jan 27 19:51:21 crc kubenswrapper[4915]: I0127 19:51:21.503770 4915 scope.go:117] "RemoveContainer" containerID="f6f56b993739f053f077b71f903f50763522d492e042f9923887655154e51bea" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.575862 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2r276"] Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577170 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577199 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577241 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="extract-utilities" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577260 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="extract-utilities" Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577287 4915 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="extract-utilities" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577304 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="extract-utilities" Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577327 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577343 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577371 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="extract-content" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577389 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="extract-content" Jan 27 19:53:17 crc kubenswrapper[4915]: E0127 19:53:17.577413 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="extract-content" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577428 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="extract-content" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577738 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e7b729-afd2-4975-840e-014d32415c3e" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.577834 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f87f4e-694f-4376-bca2-14b958c85784" containerName="registry-server" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.580237 4915 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2r276" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.594529 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2r276"] Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.757376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.757495 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.757624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqgfm\" (UniqueName: \"kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.858932 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276" Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.859018 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqgfm\" (UniqueName: \"kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.859054 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.859579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.860415 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.877458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqgfm\" (UniqueName: \"kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm\") pod \"community-operators-2r276\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") " pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:17 crc kubenswrapper[4915]: I0127 19:53:17.917552 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:18 crc kubenswrapper[4915]: I0127 19:53:18.418057 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2r276"]
Jan 27 19:53:18 crc kubenswrapper[4915]: I0127 19:53:18.503507 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerStarted","Data":"b9e888866687d8f9e22829c16ba90787c9400d6061ecf48c5519b2eb0644a4e6"}
Jan 27 19:53:19 crc kubenswrapper[4915]: I0127 19:53:19.512540 4915 generic.go:334] "Generic (PLEG): container finished" podID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerID="a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001" exitCode=0
Jan 27 19:53:19 crc kubenswrapper[4915]: I0127 19:53:19.512608 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerDied","Data":"a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001"}
Jan 27 19:53:22 crc kubenswrapper[4915]: I0127 19:53:22.539159 4915 generic.go:334] "Generic (PLEG): container finished" podID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerID="c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba" exitCode=0
Jan 27 19:53:22 crc kubenswrapper[4915]: I0127 19:53:22.539233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerDied","Data":"c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba"}
Jan 27 19:53:25 crc kubenswrapper[4915]: I0127 19:53:25.566659 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerStarted","Data":"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"}
Jan 27 19:53:25 crc kubenswrapper[4915]: I0127 19:53:25.587246 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2r276" podStartSLOduration=3.450097226 podStartE2EDuration="8.587227581s" podCreationTimestamp="2026-01-27 19:53:17 +0000 UTC" firstStartedPulling="2026-01-27 19:53:19.5154663 +0000 UTC m=+4290.873319964" lastFinishedPulling="2026-01-27 19:53:24.652596645 +0000 UTC m=+4296.010450319" observedRunningTime="2026-01-27 19:53:25.583919119 +0000 UTC m=+4296.941772823" watchObservedRunningTime="2026-01-27 19:53:25.587227581 +0000 UTC m=+4296.945081255"
Jan 27 19:53:27 crc kubenswrapper[4915]: I0127 19:53:27.919326 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:27 crc kubenswrapper[4915]: I0127 19:53:27.919784 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:27 crc kubenswrapper[4915]: I0127 19:53:27.992864 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:37 crc kubenswrapper[4915]: I0127 19:53:37.995962 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:38 crc kubenswrapper[4915]: I0127 19:53:38.055165 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2r276"]
Jan 27 19:53:38 crc kubenswrapper[4915]: I0127 19:53:38.674487 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2r276" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="registry-server" containerID="cri-o://fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1" gracePeriod=2
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.688457 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.688489 4915 generic.go:334] "Generic (PLEG): container finished" podID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerID="fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1" exitCode=0
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.688521 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerDied","Data":"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"}
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.688546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2r276" event={"ID":"8eeede0f-80bb-4fef-8fab-30374bea1927","Type":"ContainerDied","Data":"b9e888866687d8f9e22829c16ba90787c9400d6061ecf48c5519b2eb0644a4e6"}
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.688563 4915 scope.go:117] "RemoveContainer" containerID="fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.719005 4915 scope.go:117] "RemoveContainer" containerID="c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.743637 4915 scope.go:117] "RemoveContainer" containerID="a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.773767 4915 scope.go:117] "RemoveContainer" containerID="fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"
Jan 27 19:53:39 crc kubenswrapper[4915]: E0127 19:53:39.774284 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1\": container with ID starting with fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1 not found: ID does not exist" containerID="fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.774323 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1"} err="failed to get container status \"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1\": rpc error: code = NotFound desc = could not find container \"fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1\": container with ID starting with fcc5a1d5509a1d379686bbd0d6d755b1904153fbc429475d543d54147237e0d1 not found: ID does not exist"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.774347 4915 scope.go:117] "RemoveContainer" containerID="c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba"
Jan 27 19:53:39 crc kubenswrapper[4915]: E0127 19:53:39.774692 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba\": container with ID starting with c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba not found: ID does not exist" containerID="c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.774725 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba"} err="failed to get container status \"c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba\": rpc error: code = NotFound desc = could not find container \"c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba\": container with ID starting with c54b86bebf1bbf06a72f623a5457c91d037a5b0d6d07b5c24d0049e2501d51ba not found: ID does not exist"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.774743 4915 scope.go:117] "RemoveContainer" containerID="a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001"
Jan 27 19:53:39 crc kubenswrapper[4915]: E0127 19:53:39.775165 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001\": container with ID starting with a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001 not found: ID does not exist" containerID="a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.775193 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001"} err="failed to get container status \"a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001\": rpc error: code = NotFound desc = could not find container \"a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001\": container with ID starting with a289030cf0b981d560e210e4dd025be14b2bb7e6f0f871ebd2a58d905a118001 not found: ID does not exist"
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.825210 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content\") pod \"8eeede0f-80bb-4fef-8fab-30374bea1927\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") "
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.825405 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities\") pod \"8eeede0f-80bb-4fef-8fab-30374bea1927\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") "
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.825513 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqgfm\" (UniqueName: \"kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm\") pod \"8eeede0f-80bb-4fef-8fab-30374bea1927\" (UID: \"8eeede0f-80bb-4fef-8fab-30374bea1927\") "
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.827040 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities" (OuterVolumeSpecName: "utilities") pod "8eeede0f-80bb-4fef-8fab-30374bea1927" (UID: "8eeede0f-80bb-4fef-8fab-30374bea1927"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.832485 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm" (OuterVolumeSpecName: "kube-api-access-gqgfm") pod "8eeede0f-80bb-4fef-8fab-30374bea1927" (UID: "8eeede0f-80bb-4fef-8fab-30374bea1927"). InnerVolumeSpecName "kube-api-access-gqgfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.895381 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eeede0f-80bb-4fef-8fab-30374bea1927" (UID: "8eeede0f-80bb-4fef-8fab-30374bea1927"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.927561 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqgfm\" (UniqueName: \"kubernetes.io/projected/8eeede0f-80bb-4fef-8fab-30374bea1927-kube-api-access-gqgfm\") on node \"crc\" DevicePath \"\""
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.927617 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:53:39 crc kubenswrapper[4915]: I0127 19:53:39.927632 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eeede0f-80bb-4fef-8fab-30374bea1927-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:53:40 crc kubenswrapper[4915]: I0127 19:53:40.701773 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2r276"
Jan 27 19:53:40 crc kubenswrapper[4915]: I0127 19:53:40.752372 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2r276"]
Jan 27 19:53:40 crc kubenswrapper[4915]: I0127 19:53:40.760517 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2r276"]
Jan 27 19:53:41 crc kubenswrapper[4915]: I0127 19:53:41.380850 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" path="/var/lib/kubelet/pods/8eeede0f-80bb-4fef-8fab-30374bea1927/volumes"
Jan 27 19:53:50 crc kubenswrapper[4915]: I0127 19:53:50.625260 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:53:50 crc kubenswrapper[4915]: I0127 19:53:50.626270 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.434286 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:53:51 crc kubenswrapper[4915]: E0127 19:53:51.434703 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="extract-utilities"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.434725 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="extract-utilities"
Jan 27 19:53:51 crc kubenswrapper[4915]: E0127 19:53:51.434748 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="extract-content"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.434759 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="extract-content"
Jan 27 19:53:51 crc kubenswrapper[4915]: E0127 19:53:51.434779 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="registry-server"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.434809 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="registry-server"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.435109 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeede0f-80bb-4fef-8fab-30374bea1927" containerName="registry-server"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.436960 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.463827 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.627697 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.627939 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw25h\" (UniqueName: \"kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.628162 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.729421 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.729497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw25h\" (UniqueName: \"kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.729569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.730125 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.730283 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.759515 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw25h\" (UniqueName: \"kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h\") pod \"redhat-marketplace-rmn69\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") " pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:51 crc kubenswrapper[4915]: I0127 19:53:51.766976 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:53:52 crc kubenswrapper[4915]: I0127 19:53:52.198417 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:53:52 crc kubenswrapper[4915]: I0127 19:53:52.807505 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerStarted","Data":"e8eb4f8150a28fb6848c0f784f44a0a956f2b96bdecc1da63d858279a2325185"}
Jan 27 19:53:53 crc kubenswrapper[4915]: I0127 19:53:53.819405 4915 generic.go:334] "Generic (PLEG): container finished" podID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerID="bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc" exitCode=0
Jan 27 19:53:53 crc kubenswrapper[4915]: I0127 19:53:53.819507 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerDied","Data":"bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc"}
Jan 27 19:53:56 crc kubenswrapper[4915]: I0127 19:53:56.850412 4915 generic.go:334] "Generic (PLEG): container finished" podID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerID="ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1" exitCode=0
Jan 27 19:53:56 crc kubenswrapper[4915]: I0127 19:53:56.850733 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerDied","Data":"ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1"}
Jan 27 19:53:58 crc kubenswrapper[4915]: I0127 19:53:58.870976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerStarted","Data":"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"}
Jan 27 19:54:01 crc kubenswrapper[4915]: I0127 19:54:01.767678 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:01 crc kubenswrapper[4915]: I0127 19:54:01.768187 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:01 crc kubenswrapper[4915]: I0127 19:54:01.822254 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:01 crc kubenswrapper[4915]: I0127 19:54:01.846496 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmn69" podStartSLOduration=6.534389917 podStartE2EDuration="10.846474823s" podCreationTimestamp="2026-01-27 19:53:51 +0000 UTC" firstStartedPulling="2026-01-27 19:53:53.823402173 +0000 UTC m=+4325.181255877" lastFinishedPulling="2026-01-27 19:53:58.135487079 +0000 UTC m=+4329.493340783" observedRunningTime="2026-01-27 19:53:58.899855897 +0000 UTC m=+4330.257709591" watchObservedRunningTime="2026-01-27 19:54:01.846474823 +0000 UTC m=+4333.204328497"
Jan 27 19:54:11 crc kubenswrapper[4915]: I0127 19:54:11.820366 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:11 crc kubenswrapper[4915]: I0127 19:54:11.880964 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:54:11 crc kubenswrapper[4915]: I0127 19:54:11.980440 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rmn69" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="registry-server" containerID="cri-o://8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3" gracePeriod=2
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.511714 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.671485 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content\") pod \"f689f039-fee2-4ebb-80b8-26360a2a18bf\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") "
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.671621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities\") pod \"f689f039-fee2-4ebb-80b8-26360a2a18bf\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") "
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.671658 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw25h\" (UniqueName: \"kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h\") pod \"f689f039-fee2-4ebb-80b8-26360a2a18bf\" (UID: \"f689f039-fee2-4ebb-80b8-26360a2a18bf\") "
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.673031 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities" (OuterVolumeSpecName: "utilities") pod "f689f039-fee2-4ebb-80b8-26360a2a18bf" (UID: "f689f039-fee2-4ebb-80b8-26360a2a18bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.679143 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h" (OuterVolumeSpecName: "kube-api-access-pw25h") pod "f689f039-fee2-4ebb-80b8-26360a2a18bf" (UID: "f689f039-fee2-4ebb-80b8-26360a2a18bf"). InnerVolumeSpecName "kube-api-access-pw25h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.698767 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f689f039-fee2-4ebb-80b8-26360a2a18bf" (UID: "f689f039-fee2-4ebb-80b8-26360a2a18bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.773096 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.773145 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw25h\" (UniqueName: \"kubernetes.io/projected/f689f039-fee2-4ebb-80b8-26360a2a18bf-kube-api-access-pw25h\") on node \"crc\" DevicePath \"\""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.773160 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689f039-fee2-4ebb-80b8-26360a2a18bf-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.988856 4915 generic.go:334] "Generic (PLEG): container finished" podID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerID="8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3" exitCode=0
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.988906 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerDied","Data":"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"}
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.988921 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmn69"
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.988939 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmn69" event={"ID":"f689f039-fee2-4ebb-80b8-26360a2a18bf","Type":"ContainerDied","Data":"e8eb4f8150a28fb6848c0f784f44a0a956f2b96bdecc1da63d858279a2325185"}
Jan 27 19:54:12 crc kubenswrapper[4915]: I0127 19:54:12.988958 4915 scope.go:117] "RemoveContainer" containerID="8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.012830 4915 scope.go:117] "RemoveContainer" containerID="ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.028966 4915 scope.go:117] "RemoveContainer" containerID="bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.030537 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.035611 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmn69"]
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.066096 4915 scope.go:117] "RemoveContainer" containerID="8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"
Jan 27 19:54:13 crc kubenswrapper[4915]: E0127 19:54:13.067747 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3\": container with ID starting with 8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3 not found: ID does not exist" containerID="8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.067902 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3"} err="failed to get container status \"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3\": rpc error: code = NotFound desc = could not find container \"8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3\": container with ID starting with 8c666be556b632d414118ee0792eb4ffe5537feaa89966a5cb0a137fa0904dc3 not found: ID does not exist"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.067955 4915 scope.go:117] "RemoveContainer" containerID="ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1"
Jan 27 19:54:13 crc kubenswrapper[4915]: E0127 19:54:13.069028 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1\": container with ID starting with ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1 not found: ID does not exist" containerID="ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.069290 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1"} err="failed to get container status \"ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1\": rpc error: code = NotFound desc = could not find container \"ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1\": container with ID starting with ac4f8db82101dcf6c745398dd842b1cda79dad2494ac0669b6b22f9709ae45f1 not found: ID does not exist"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.069331 4915 scope.go:117] "RemoveContainer" containerID="bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc"
Jan 27 19:54:13 crc kubenswrapper[4915]: E0127 19:54:13.069832 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc\": container with ID starting with bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc not found: ID does not exist" containerID="bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.070027 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc"} err="failed to get container status \"bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc\": rpc error: code = NotFound desc = could not find container \"bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc\": container with ID starting with bb462514f2128d5f4679be60360873088fbce156be7627a237404b9d2d7fd8cc not found: ID does not exist"
Jan 27 19:54:13 crc kubenswrapper[4915]: I0127 19:54:13.375342 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" path="/var/lib/kubelet/pods/f689f039-fee2-4ebb-80b8-26360a2a18bf/volumes"
Jan 27 19:54:20 crc kubenswrapper[4915]: I0127 19:54:20.624703 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:54:20 crc kubenswrapper[4915]: I0127 19:54:20.625195 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:54:50 crc kubenswrapper[4915]: I0127 19:54:50.625440 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:54:50 crc kubenswrapper[4915]: I0127 19:54:50.626239 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:54:50 crc kubenswrapper[4915]: I0127 19:54:50.626321 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 19:54:50 crc kubenswrapper[4915]: I0127 19:54:50.627401 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:54:50 crc kubenswrapper[4915]: I0127 19:54:50.627512 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" gracePeriod=600
Jan 27 19:54:51 crc kubenswrapper[4915]: I0127 19:54:51.308024 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" exitCode=0
Jan 27 19:54:51 crc kubenswrapper[4915]: I0127 19:54:51.308111 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c"}
Jan 27 19:54:51 crc kubenswrapper[4915]: I0127 19:54:51.308169 4915 scope.go:117] "RemoveContainer" containerID="cc7d6ec1bf845f1012fcec8f77d179f8bec8f6fd2bbfbb24a74dc4bf6e1989d5"
Jan 27 19:54:51 crc kubenswrapper[4915]: E0127 19:54:51.310767 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 19:54:52 crc kubenswrapper[4915]: I0127 19:54:52.319919 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c"
Jan 27 19:54:52 crc kubenswrapper[4915]: E0127 19:54:52.320395 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\"
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:55:07 crc kubenswrapper[4915]: I0127 19:55:07.358181 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:55:07 crc kubenswrapper[4915]: E0127 19:55:07.359030 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:55:20 crc kubenswrapper[4915]: I0127 19:55:20.358313 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:55:20 crc kubenswrapper[4915]: E0127 19:55:20.359416 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:55:32 crc kubenswrapper[4915]: I0127 19:55:32.357496 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:55:32 crc kubenswrapper[4915]: E0127 19:55:32.358456 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:55:45 crc kubenswrapper[4915]: I0127 19:55:45.357985 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:55:45 crc kubenswrapper[4915]: E0127 19:55:45.358784 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:55:56 crc kubenswrapper[4915]: I0127 19:55:56.358637 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:55:56 crc kubenswrapper[4915]: E0127 19:55:56.359620 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:56:08 crc kubenswrapper[4915]: I0127 19:56:08.357220 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:56:08 crc kubenswrapper[4915]: E0127 19:56:08.357871 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:56:20 crc kubenswrapper[4915]: I0127 19:56:20.358517 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:56:20 crc kubenswrapper[4915]: E0127 19:56:20.359676 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:56:35 crc kubenswrapper[4915]: I0127 19:56:35.357996 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:56:35 crc kubenswrapper[4915]: E0127 19:56:35.358957 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:56:48 crc kubenswrapper[4915]: I0127 19:56:48.357561 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:56:48 crc kubenswrapper[4915]: E0127 19:56:48.358303 4915 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:00 crc kubenswrapper[4915]: I0127 19:57:00.357308 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:57:00 crc kubenswrapper[4915]: E0127 19:57:00.358204 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:14 crc kubenswrapper[4915]: I0127 19:57:14.358006 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:57:14 crc kubenswrapper[4915]: E0127 19:57:14.358974 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:27 crc kubenswrapper[4915]: I0127 19:57:27.358224 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:57:27 crc kubenswrapper[4915]: E0127 19:57:27.359499 4915 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:42 crc kubenswrapper[4915]: I0127 19:57:42.357520 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:57:42 crc kubenswrapper[4915]: E0127 19:57:42.358568 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.813548 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-rnmzh"] Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.821311 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-rnmzh"] Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.962055 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tzqqz"] Jan 27 19:57:52 crc kubenswrapper[4915]: E0127 19:57:52.962603 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="registry-server" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.962659 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="registry-server" Jan 27 19:57:52 crc kubenswrapper[4915]: E0127 
19:57:52.962689 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="extract-utilities" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.962699 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="extract-utilities" Jan 27 19:57:52 crc kubenswrapper[4915]: E0127 19:57:52.962730 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="extract-content" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.962738 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="extract-content" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.963032 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f689f039-fee2-4ebb-80b8-26360a2a18bf" containerName="registry-server" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.963684 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.966083 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.966102 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.968424 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.968652 4915 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l2x7g" Jan 27 19:57:52 crc kubenswrapper[4915]: I0127 19:57:52.985307 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tzqqz"] Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.065971 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.066153 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.066359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4wm\" (UniqueName: \"kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm\") pod \"crc-storage-crc-tzqqz\" (UID: 
\"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.168394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.168510 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.168600 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4wm\" (UniqueName: \"kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.168946 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.169619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.195541 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4wm\" (UniqueName: \"kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm\") pod \"crc-storage-crc-tzqqz\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.289714 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.358370 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:57:53 crc kubenswrapper[4915]: E0127 19:57:53.358661 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.369882 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af86ec3-0a4b-4749-8da0-a67435d42aee" path="/var/lib/kubelet/pods/2af86ec3-0a4b-4749-8da0-a67435d42aee/volumes" Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.752101 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tzqqz"] Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.755953 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:57:53 crc kubenswrapper[4915]: I0127 19:57:53.777775 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tzqqz" 
event={"ID":"352d358b-9465-4591-8159-74342269ace6","Type":"ContainerStarted","Data":"ea897f268563d15f9d9e51bbbe36064a3e20dd7bffe5fcca7ffcf9e9e4770b4a"} Jan 27 19:57:54 crc kubenswrapper[4915]: I0127 19:57:54.788358 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tzqqz" event={"ID":"352d358b-9465-4591-8159-74342269ace6","Type":"ContainerStarted","Data":"f835b485a2f0a391b77314b5350b80f347bb0565f43cae1fabace47a6fe1dd3f"} Jan 27 19:57:54 crc kubenswrapper[4915]: I0127 19:57:54.814842 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-tzqqz" podStartSLOduration=2.018616113 podStartE2EDuration="2.814788443s" podCreationTimestamp="2026-01-27 19:57:52 +0000 UTC" firstStartedPulling="2026-01-27 19:57:53.755697967 +0000 UTC m=+4565.113551631" lastFinishedPulling="2026-01-27 19:57:54.551870297 +0000 UTC m=+4565.909723961" observedRunningTime="2026-01-27 19:57:54.80815572 +0000 UTC m=+4566.166009384" watchObservedRunningTime="2026-01-27 19:57:54.814788443 +0000 UTC m=+4566.172642147" Jan 27 19:57:55 crc kubenswrapper[4915]: I0127 19:57:55.481675 4915 scope.go:117] "RemoveContainer" containerID="d2e26f7efab91a0c0839e1fee083d2c70a8048d7e522b0db19dadbc99ab3bad3" Jan 27 19:57:55 crc kubenswrapper[4915]: I0127 19:57:55.802028 4915 generic.go:334] "Generic (PLEG): container finished" podID="352d358b-9465-4591-8159-74342269ace6" containerID="f835b485a2f0a391b77314b5350b80f347bb0565f43cae1fabace47a6fe1dd3f" exitCode=0 Jan 27 19:57:55 crc kubenswrapper[4915]: I0127 19:57:55.802152 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tzqqz" event={"ID":"352d358b-9465-4591-8159-74342269ace6","Type":"ContainerDied","Data":"f835b485a2f0a391b77314b5350b80f347bb0565f43cae1fabace47a6fe1dd3f"} Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.126133 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.230741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt\") pod \"352d358b-9465-4591-8159-74342269ace6\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.230877 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4wm\" (UniqueName: \"kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm\") pod \"352d358b-9465-4591-8159-74342269ace6\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.230954 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage\") pod \"352d358b-9465-4591-8159-74342269ace6\" (UID: \"352d358b-9465-4591-8159-74342269ace6\") " Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.231007 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "352d358b-9465-4591-8159-74342269ace6" (UID: "352d358b-9465-4591-8159-74342269ace6"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.231396 4915 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/352d358b-9465-4591-8159-74342269ace6-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.237989 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm" (OuterVolumeSpecName: "kube-api-access-ct4wm") pod "352d358b-9465-4591-8159-74342269ace6" (UID: "352d358b-9465-4591-8159-74342269ace6"). InnerVolumeSpecName "kube-api-access-ct4wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.253152 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "352d358b-9465-4591-8159-74342269ace6" (UID: "352d358b-9465-4591-8159-74342269ace6"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.332988 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4wm\" (UniqueName: \"kubernetes.io/projected/352d358b-9465-4591-8159-74342269ace6-kube-api-access-ct4wm\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.333045 4915 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/352d358b-9465-4591-8159-74342269ace6-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.826914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tzqqz" event={"ID":"352d358b-9465-4591-8159-74342269ace6","Type":"ContainerDied","Data":"ea897f268563d15f9d9e51bbbe36064a3e20dd7bffe5fcca7ffcf9e9e4770b4a"} Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.826953 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea897f268563d15f9d9e51bbbe36064a3e20dd7bffe5fcca7ffcf9e9e4770b4a" Jan 27 19:57:57 crc kubenswrapper[4915]: I0127 19:57:57.827023 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tzqqz" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.131629 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-tzqqz"] Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.136597 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-tzqqz"] Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.299776 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rkkzk"] Jan 27 19:57:59 crc kubenswrapper[4915]: E0127 19:57:59.300278 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352d358b-9465-4591-8159-74342269ace6" containerName="storage" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.300302 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="352d358b-9465-4591-8159-74342269ace6" containerName="storage" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.300614 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="352d358b-9465-4591-8159-74342269ace6" containerName="storage" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.301492 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.304030 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.304962 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.305454 4915 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-l2x7g" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.307513 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rkkzk"] Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.309891 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.361733 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7zn\" (UniqueName: \"kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.361858 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.361890 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage\") pod \"crc-storage-crc-rkkzk\" (UID: 
\"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.365244 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352d358b-9465-4591-8159-74342269ace6" path="/var/lib/kubelet/pods/352d358b-9465-4591-8159-74342269ace6/volumes" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.463490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.463605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.463678 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7zn\" (UniqueName: \"kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.463900 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.464338 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.484363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7zn\" (UniqueName: \"kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn\") pod \"crc-storage-crc-rkkzk\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:57:59 crc kubenswrapper[4915]: I0127 19:57:59.623054 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:58:00 crc kubenswrapper[4915]: I0127 19:58:00.048317 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rkkzk"] Jan 27 19:58:00 crc kubenswrapper[4915]: I0127 19:58:00.847560 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rkkzk" event={"ID":"b23a7926-216e-4548-a6ed-1d8a6280e0b1","Type":"ContainerStarted","Data":"e3a669c66d0d6f1443ab517fe173da4772805415f06ffea0aa82f87ab231afc3"} Jan 27 19:58:01 crc kubenswrapper[4915]: I0127 19:58:01.857613 4915 generic.go:334] "Generic (PLEG): container finished" podID="b23a7926-216e-4548-a6ed-1d8a6280e0b1" containerID="336bc03689798f21620d9006c16ee25305d9fe5bcd7e9e68e6b80a54441386ea" exitCode=0 Jan 27 19:58:01 crc kubenswrapper[4915]: I0127 19:58:01.857710 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rkkzk" event={"ID":"b23a7926-216e-4548-a6ed-1d8a6280e0b1","Type":"ContainerDied","Data":"336bc03689798f21620d9006c16ee25305d9fe5bcd7e9e68e6b80a54441386ea"} Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.122340 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.320027 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage\") pod \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.320454 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7zn\" (UniqueName: \"kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn\") pod \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.320644 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt\") pod \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\" (UID: \"b23a7926-216e-4548-a6ed-1d8a6280e0b1\") " Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.320784 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b23a7926-216e-4548-a6ed-1d8a6280e0b1" (UID: "b23a7926-216e-4548-a6ed-1d8a6280e0b1"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.321157 4915 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b23a7926-216e-4548-a6ed-1d8a6280e0b1-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.328002 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn" (OuterVolumeSpecName: "kube-api-access-zt7zn") pod "b23a7926-216e-4548-a6ed-1d8a6280e0b1" (UID: "b23a7926-216e-4548-a6ed-1d8a6280e0b1"). InnerVolumeSpecName "kube-api-access-zt7zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.354224 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b23a7926-216e-4548-a6ed-1d8a6280e0b1" (UID: "b23a7926-216e-4548-a6ed-1d8a6280e0b1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.422300 4915 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b23a7926-216e-4548-a6ed-1d8a6280e0b1-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.422352 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7zn\" (UniqueName: \"kubernetes.io/projected/b23a7926-216e-4548-a6ed-1d8a6280e0b1-kube-api-access-zt7zn\") on node \"crc\" DevicePath \"\"" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.874246 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rkkzk" event={"ID":"b23a7926-216e-4548-a6ed-1d8a6280e0b1","Type":"ContainerDied","Data":"e3a669c66d0d6f1443ab517fe173da4772805415f06ffea0aa82f87ab231afc3"} Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.874328 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a669c66d0d6f1443ab517fe173da4772805415f06ffea0aa82f87ab231afc3" Jan 27 19:58:03 crc kubenswrapper[4915]: I0127 19:58:03.874348 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rkkzk" Jan 27 19:58:04 crc kubenswrapper[4915]: I0127 19:58:04.357949 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:58:04 crc kubenswrapper[4915]: E0127 19:58:04.358318 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:58:15 crc kubenswrapper[4915]: I0127 19:58:15.357380 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:58:15 crc kubenswrapper[4915]: E0127 19:58:15.359255 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:58:27 crc kubenswrapper[4915]: I0127 19:58:27.058685 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:58:27 crc kubenswrapper[4915]: E0127 19:58:27.067800 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:58:38 crc kubenswrapper[4915]: I0127 19:58:38.357538 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:58:38 crc kubenswrapper[4915]: E0127 19:58:38.358283 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:58:51 crc kubenswrapper[4915]: I0127 19:58:51.358061 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:58:51 crc kubenswrapper[4915]: E0127 19:58:51.359179 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:59:03 crc kubenswrapper[4915]: I0127 19:59:03.357488 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:59:03 crc kubenswrapper[4915]: E0127 19:59:03.358363 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:59:18 crc kubenswrapper[4915]: I0127 19:59:18.357966 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:59:18 crc kubenswrapper[4915]: E0127 19:59:18.358898 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:59:33 crc kubenswrapper[4915]: I0127 19:59:33.358051 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:59:33 crc kubenswrapper[4915]: E0127 19:59:33.360549 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:59:44 crc kubenswrapper[4915]: I0127 19:59:44.357625 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:59:44 crc kubenswrapper[4915]: E0127 19:59:44.358445 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 19:59:58 crc kubenswrapper[4915]: I0127 19:59:58.357478 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 19:59:58 crc kubenswrapper[4915]: I0127 19:59:58.788343 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c"} Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.188775 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb"] Jan 27 20:00:00 crc kubenswrapper[4915]: E0127 20:00:00.189652 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23a7926-216e-4548-a6ed-1d8a6280e0b1" containerName="storage" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.189675 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23a7926-216e-4548-a6ed-1d8a6280e0b1" containerName="storage" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.189961 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23a7926-216e-4548-a6ed-1d8a6280e0b1" containerName="storage" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.190671 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.193051 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.193112 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.221168 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb"] Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.375030 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.375077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfcq\" (UniqueName: \"kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.375299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.476469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.476540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfcq\" (UniqueName: \"kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.476701 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.478371 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.482646 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.502076 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfcq\" (UniqueName: \"kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq\") pod \"collect-profiles-29492400-4m4nb\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:00 crc kubenswrapper[4915]: I0127 20:00:00.538181 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:01 crc kubenswrapper[4915]: I0127 20:00:01.019271 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb"] Jan 27 20:00:01 crc kubenswrapper[4915]: W0127 20:00:01.022246 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71005214_b374_41b3_b484_609fe44d2fa1.slice/crio-0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f WatchSource:0}: Error finding container 0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f: Status 404 returned error can't find the container with id 0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f Jan 27 20:00:01 crc kubenswrapper[4915]: I0127 20:00:01.808504 4915 generic.go:334] "Generic (PLEG): container finished" podID="71005214-b374-41b3-b484-609fe44d2fa1" containerID="c8c0a28d00d971aab6b71b4cfa83b41fb29b0248bc2a90ebaa0a757903062058" exitCode=0 Jan 27 20:00:01 crc kubenswrapper[4915]: I0127 20:00:01.808617 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" event={"ID":"71005214-b374-41b3-b484-609fe44d2fa1","Type":"ContainerDied","Data":"c8c0a28d00d971aab6b71b4cfa83b41fb29b0248bc2a90ebaa0a757903062058"} Jan 27 20:00:01 crc kubenswrapper[4915]: I0127 20:00:01.808874 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" event={"ID":"71005214-b374-41b3-b484-609fe44d2fa1","Type":"ContainerStarted","Data":"0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f"} Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.046301 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.217683 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume\") pod \"71005214-b374-41b3-b484-609fe44d2fa1\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.217879 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume\") pod \"71005214-b374-41b3-b484-609fe44d2fa1\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.217923 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfcq\" (UniqueName: \"kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq\") pod \"71005214-b374-41b3-b484-609fe44d2fa1\" (UID: \"71005214-b374-41b3-b484-609fe44d2fa1\") " Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.218454 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume" (OuterVolumeSpecName: "config-volume") pod "71005214-b374-41b3-b484-609fe44d2fa1" (UID: "71005214-b374-41b3-b484-609fe44d2fa1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.222705 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71005214-b374-41b3-b484-609fe44d2fa1" (UID: "71005214-b374-41b3-b484-609fe44d2fa1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.223131 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq" (OuterVolumeSpecName: "kube-api-access-6dfcq") pod "71005214-b374-41b3-b484-609fe44d2fa1" (UID: "71005214-b374-41b3-b484-609fe44d2fa1"). InnerVolumeSpecName "kube-api-access-6dfcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.319913 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71005214-b374-41b3-b484-609fe44d2fa1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.319954 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfcq\" (UniqueName: \"kubernetes.io/projected/71005214-b374-41b3-b484-609fe44d2fa1-kube-api-access-6dfcq\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.319972 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71005214-b374-41b3-b484-609fe44d2fa1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.825499 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" event={"ID":"71005214-b374-41b3-b484-609fe44d2fa1","Type":"ContainerDied","Data":"0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f"} Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.825542 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cbddb37976ade09ef5a682c4a7cd25a36e220d6b2a8406edeef2c6759bed55f" Jan 27 20:00:03 crc kubenswrapper[4915]: I0127 20:00:03.825600 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb" Jan 27 20:00:04 crc kubenswrapper[4915]: I0127 20:00:04.123625 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr"] Jan 27 20:00:04 crc kubenswrapper[4915]: I0127 20:00:04.129149 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-w2cbr"] Jan 27 20:00:05 crc kubenswrapper[4915]: I0127 20:00:05.366778 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89136b8e-1c34-4298-9d3d-9619f50ace98" path="/var/lib/kubelet/pods/89136b8e-1c34-4298-9d3d-9619f50ace98/volumes" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.165399 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:19 crc kubenswrapper[4915]: E0127 20:00:19.166400 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71005214-b374-41b3-b484-609fe44d2fa1" containerName="collect-profiles" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.166422 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="71005214-b374-41b3-b484-609fe44d2fa1" containerName="collect-profiles" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.166741 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="71005214-b374-41b3-b484-609fe44d2fa1" containerName="collect-profiles" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.169389 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.188631 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.360012 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.360417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdw9\" (UniqueName: \"kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.360567 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.462485 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.462621 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tqdw9\" (UniqueName: \"kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.462650 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.463152 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.463408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.484905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqdw9\" (UniqueName: \"kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9\") pod \"certified-operators-9pzfg\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:19 crc kubenswrapper[4915]: I0127 20:00:19.495645 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:20 crc kubenswrapper[4915]: I0127 20:00:20.049010 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:20 crc kubenswrapper[4915]: I0127 20:00:20.968020 4915 generic.go:334] "Generic (PLEG): container finished" podID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerID="13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457" exitCode=0 Jan 27 20:00:20 crc kubenswrapper[4915]: I0127 20:00:20.968120 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerDied","Data":"13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457"} Jan 27 20:00:20 crc kubenswrapper[4915]: I0127 20:00:20.968263 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerStarted","Data":"7e45c43903ab2337089092da7ef2a4433fd78395988b8c3c875a2fdbcaea29b8"} Jan 27 20:00:21 crc kubenswrapper[4915]: I0127 20:00:21.976689 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerStarted","Data":"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538"} Jan 27 20:00:22 crc kubenswrapper[4915]: I0127 20:00:22.985257 4915 generic.go:334] "Generic (PLEG): container finished" podID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerID="dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538" exitCode=0 Jan 27 20:00:22 crc kubenswrapper[4915]: I0127 20:00:22.985315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" 
event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerDied","Data":"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538"} Jan 27 20:00:23 crc kubenswrapper[4915]: I0127 20:00:23.995763 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerStarted","Data":"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a"} Jan 27 20:00:24 crc kubenswrapper[4915]: I0127 20:00:24.029022 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pzfg" podStartSLOduration=2.403555491 podStartE2EDuration="5.028993608s" podCreationTimestamp="2026-01-27 20:00:19 +0000 UTC" firstStartedPulling="2026-01-27 20:00:20.973157306 +0000 UTC m=+4712.331010970" lastFinishedPulling="2026-01-27 20:00:23.598595383 +0000 UTC m=+4714.956449087" observedRunningTime="2026-01-27 20:00:24.014775538 +0000 UTC m=+4715.372629232" watchObservedRunningTime="2026-01-27 20:00:24.028993608 +0000 UTC m=+4715.386847312" Jan 27 20:00:29 crc kubenswrapper[4915]: I0127 20:00:29.496763 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:29 crc kubenswrapper[4915]: I0127 20:00:29.497991 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:29 crc kubenswrapper[4915]: I0127 20:00:29.544398 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:30 crc kubenswrapper[4915]: I0127 20:00:30.109689 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:30 crc kubenswrapper[4915]: I0127 20:00:30.151551 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.067121 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pzfg" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="registry-server" containerID="cri-o://643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a" gracePeriod=2 Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.464267 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.666215 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities\") pod \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.666343 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqdw9\" (UniqueName: \"kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9\") pod \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.666416 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content\") pod \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\" (UID: \"20b1b451-535d-4c6a-95ab-523b9c13ae1b\") " Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.667352 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities" (OuterVolumeSpecName: "utilities") pod "20b1b451-535d-4c6a-95ab-523b9c13ae1b" (UID: 
"20b1b451-535d-4c6a-95ab-523b9c13ae1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.672404 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9" (OuterVolumeSpecName: "kube-api-access-tqdw9") pod "20b1b451-535d-4c6a-95ab-523b9c13ae1b" (UID: "20b1b451-535d-4c6a-95ab-523b9c13ae1b"). InnerVolumeSpecName "kube-api-access-tqdw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.717621 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20b1b451-535d-4c6a-95ab-523b9c13ae1b" (UID: "20b1b451-535d-4c6a-95ab-523b9c13ae1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.768716 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.768747 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqdw9\" (UniqueName: \"kubernetes.io/projected/20b1b451-535d-4c6a-95ab-523b9c13ae1b-kube-api-access-tqdw9\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:32 crc kubenswrapper[4915]: I0127 20:00:32.768757 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b1b451-535d-4c6a-95ab-523b9c13ae1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.079571 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerID="643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a" exitCode=0 Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.079620 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerDied","Data":"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a"} Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.079652 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzfg" event={"ID":"20b1b451-535d-4c6a-95ab-523b9c13ae1b","Type":"ContainerDied","Data":"7e45c43903ab2337089092da7ef2a4433fd78395988b8c3c875a2fdbcaea29b8"} Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.079668 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzfg" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.079697 4915 scope.go:117] "RemoveContainer" containerID="643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.102358 4915 scope.go:117] "RemoveContainer" containerID="dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.122090 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.132609 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pzfg"] Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.252510 4915 scope.go:117] "RemoveContainer" containerID="13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.273163 4915 scope.go:117] "RemoveContainer" 
containerID="643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a" Jan 27 20:00:33 crc kubenswrapper[4915]: E0127 20:00:33.273531 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a\": container with ID starting with 643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a not found: ID does not exist" containerID="643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.273569 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a"} err="failed to get container status \"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a\": rpc error: code = NotFound desc = could not find container \"643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a\": container with ID starting with 643f67081a14f327e0f5e0391a8f6748181b8bfafc6a6bcfab0862fe7745e38a not found: ID does not exist" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.273589 4915 scope.go:117] "RemoveContainer" containerID="dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538" Jan 27 20:00:33 crc kubenswrapper[4915]: E0127 20:00:33.273977 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538\": container with ID starting with dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538 not found: ID does not exist" containerID="dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.274032 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538"} err="failed to get container status \"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538\": rpc error: code = NotFound desc = could not find container \"dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538\": container with ID starting with dd755c5d70485ab1d76583865b2017747ba78d0038a5f168a72845f74f659538 not found: ID does not exist" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.274062 4915 scope.go:117] "RemoveContainer" containerID="13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457" Jan 27 20:00:33 crc kubenswrapper[4915]: E0127 20:00:33.275073 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457\": container with ID starting with 13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457 not found: ID does not exist" containerID="13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.275111 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457"} err="failed to get container status \"13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457\": rpc error: code = NotFound desc = could not find container \"13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457\": container with ID starting with 13ef5d4fa4fa2183bb29170299e0dc8b155c55e3d53eadcb177c1bbea39a5457 not found: ID does not exist" Jan 27 20:00:33 crc kubenswrapper[4915]: I0127 20:00:33.370437 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" path="/var/lib/kubelet/pods/20b1b451-535d-4c6a-95ab-523b9c13ae1b/volumes" Jan 27 20:00:55 crc kubenswrapper[4915]: I0127 
20:00:55.563390 4915 scope.go:117] "RemoveContainer" containerID="e1f76f2a25d4ff1e856ba0dc8a99d8921f506735e3f4d5ed28228dc8e30badfa" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.873549 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"] Jan 27 20:01:20 crc kubenswrapper[4915]: E0127 20:01:20.874430 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="extract-content" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.874448 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="extract-content" Jan 27 20:01:20 crc kubenswrapper[4915]: E0127 20:01:20.874459 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="extract-utilities" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.874464 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="extract-utilities" Jan 27 20:01:20 crc kubenswrapper[4915]: E0127 20:01:20.874474 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="registry-server" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.874479 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="registry-server" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.874601 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b1b451-535d-4c6a-95ab-523b9c13ae1b" containerName="registry-server" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.875464 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.878080 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.881131 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.881232 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.881410 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.881525 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kz596" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.890044 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"] Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.934862 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.934944 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2px4s\" (UniqueName: \"kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:20 crc kubenswrapper[4915]: I0127 20:01:20.934987 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.036085 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.036316 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2px4s\" (UniqueName: \"kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.036432 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.037121 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.037418 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.076770 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2px4s\" (UniqueName: \"kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s\") pod \"dnsmasq-dns-5d7b5456f5-kmrp4\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") " pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.202756 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.350765 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.352275 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.376306 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.546105 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc99\" (UniqueName: \"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.546575 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.546650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.647178 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.647236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.647318 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgc99\" (UniqueName: \"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.648203 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.648737 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.673334 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgc99\" (UniqueName: \"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99\") pod \"dnsmasq-dns-98ddfc8f-92fkj\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.802537 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"] Jan 27 20:01:21 crc kubenswrapper[4915]: I0127 20:01:21.971610 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.106368 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.108467 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.114104 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.114338 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.115622 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.115971 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.116753 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67fn6" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.122748 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261729 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261853 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261877 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261891 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261930 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261955 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.261999 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bj7\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.363949 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9bj7\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364020 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364050 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364103 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364127 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364143 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364180 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " 
pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364212 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.364621 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.365381 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.365865 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.365924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.366810 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.366843 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66c49b705702902e0a71589547a6220019a29c6546a265081af230c092011e5c/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.370851 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.371476 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.391204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.409636 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9bj7\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7\") pod 
\"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.414908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") " pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.448756 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:01:22 crc kubenswrapper[4915]: W0127 20:01:22.460591 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3acbe65_7957_4891_897a_d920fd17f93d.slice/crio-7a21ad39b787f5c32219cb94abbf2fb96eb457df1b2cf7d880d1efd8fa0ccbdb WatchSource:0}: Error finding container 7a21ad39b787f5c32219cb94abbf2fb96eb457df1b2cf7d880d1efd8fa0ccbdb: Status 404 returned error can't find the container with id 7a21ad39b787f5c32219cb94abbf2fb96eb457df1b2cf7d880d1efd8fa0ccbdb Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.486880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" event={"ID":"c3acbe65-7957-4891-897a-d920fd17f93d","Type":"ContainerStarted","Data":"7a21ad39b787f5c32219cb94abbf2fb96eb457df1b2cf7d880d1efd8fa0ccbdb"} Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.491155 4915 generic.go:334] "Generic (PLEG): container finished" podID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerID="0627374ba5873023f41b6c633b5ca9dc7c223b289294c9a7874a5dc181d78c6e" exitCode=0 Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.491205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" 
event={"ID":"a1d2a764-bb16-4c36-bf1e-ae4550c1071d","Type":"ContainerDied","Data":"0627374ba5873023f41b6c633b5ca9dc7c223b289294c9a7874a5dc181d78c6e"} Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.491230 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" event={"ID":"a1d2a764-bb16-4c36-bf1e-ae4550c1071d","Type":"ContainerStarted","Data":"1092df53e4a8e3ffe299c5b7af25bd087d40ce765226b701fbad18c762eebe01"} Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.492047 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.506098 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.507504 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.517641 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.518976 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.519005 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.519302 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.519318 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9n6w9" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.538359 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.672747 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8fj\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673291 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673343 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673400 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673669 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.673749 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.775713 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.775777 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.775838 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8fj\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.776930 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.776972 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.777019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.777065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.777095 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.777114 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.778618 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.778750 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.778871 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.778955 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.790028 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.790073 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38c5dd8f125a0881309e51be0e84302bce9cf15962a92bb5571bad5d2ebbe2a2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:22 crc kubenswrapper[4915]: I0127 20:01:22.986707 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.037906 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc 
kubenswrapper[4915]: I0127 20:01:23.038973 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.039929 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.040456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8fj\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj\") pod \"rabbitmq-cell1-server-0\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc kubenswrapper[4915]: W0127 20:01:23.042676 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f8b37f_1bac_42b5_9529_e2671a11fc2f.slice/crio-17290e389f4efdf964f106db874438aa4a5dfa65c38d1ee0a45e875a949b1841 WatchSource:0}: Error finding container 17290e389f4efdf964f106db874438aa4a5dfa65c38d1ee0a45e875a949b1841: Status 404 returned error can't find the container with id 17290e389f4efdf964f106db874438aa4a5dfa65c38d1ee0a45e875a949b1841 Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.057201 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c4797330-4b41-4a32-b43b-81cb7d6e946f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.166504 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.501492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" event={"ID":"a1d2a764-bb16-4c36-bf1e-ae4550c1071d","Type":"ContainerStarted","Data":"5a89883cd51024f4854534749f80c0a809a4252ed6f8e0117a63452e33a704ef"} Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.501625 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.502918 4915 generic.go:334] "Generic (PLEG): container finished" podID="c3acbe65-7957-4891-897a-d920fd17f93d" containerID="ed534706fc469c9f590a9c074e7f6ad30d108602443cdd51a59376610511a945" exitCode=0 Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.502986 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" event={"ID":"c3acbe65-7957-4891-897a-d920fd17f93d","Type":"ContainerDied","Data":"ed534706fc469c9f590a9c074e7f6ad30d108602443cdd51a59376610511a945"} Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.510654 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerStarted","Data":"17290e389f4efdf964f106db874438aa4a5dfa65c38d1ee0a45e875a949b1841"} Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.532911 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" podStartSLOduration=3.532892208 podStartE2EDuration="3.532892208s" podCreationTimestamp="2026-01-27 20:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:23.531260388 +0000 UTC m=+4774.889114052" watchObservedRunningTime="2026-01-27 20:01:23.532892208 +0000 UTC m=+4774.890745872" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.552046 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.553392 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.555465 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-s9q4s" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.555681 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.556036 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.556197 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.564402 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.575653 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.648924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691419 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691467 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691489 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kolla-config\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691802 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691871 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\") pod 
\"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691931 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rw8\" (UniqueName: \"kubernetes.io/projected/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kube-api-access-67rw8\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.691956 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.722909 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.723745 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.725544 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.725696 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gt5sr" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.738759 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794437 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794551 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794669 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794723 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794763 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rw8\" (UniqueName: \"kubernetes.io/projected/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kube-api-access-67rw8\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.794807 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.795724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" 
Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.799155 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.800663 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.801596 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kolla-config\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.806356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.807689 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.816852 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.816932 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4bbddda31c0a642ab1cdcb83edc33ad9249277b23bb91fe073e22ccb29a54a58/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.829823 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rw8\" (UniqueName: \"kubernetes.io/projected/6cc7f0cb-29c3-471a-bb68-723c51a8bc6d-kube-api-access-67rw8\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.890271 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-337c1ea9-906f-4314-a72b-d4ad2a47f645\") pod \"openstack-galera-0\" (UID: \"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d\") " pod="openstack/openstack-galera-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.898733 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-config-data\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0" Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.898816 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kolla-config\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:23 crc kubenswrapper[4915]: I0127 20:01:23.898853 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2s9\" (UniqueName: \"kubernetes.io/projected/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kube-api-access-2w2s9\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.000483 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kolla-config\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.000535 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2s9\" (UniqueName: \"kubernetes.io/projected/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kube-api-access-2w2s9\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.000663 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-config-data\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.001575 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-config-data\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.001875 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kolla-config\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.019507 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2s9\" (UniqueName: \"kubernetes.io/projected/ec21d9de-6a54-4606-bcb5-ca3b1dad1edf-kube-api-access-2w2s9\") pod \"memcached-0\" (UID: \"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf\") " pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.046726 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.181452 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: W0127 20:01:24.402333 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec21d9de_6a54_4606_bcb5_ca3b1dad1edf.slice/crio-a98d1a6e7a11c0a58e42dee7ce8c7c0d926953bcbe0d5ed80a5ca4ae470aa619 WatchSource:0}: Error finding container a98d1a6e7a11c0a58e42dee7ce8c7c0d926953bcbe0d5ed80a5ca4ae470aa619: Status 404 returned error can't find the container with id a98d1a6e7a11c0a58e42dee7ce8c7c0d926953bcbe0d5ed80a5ca4ae470aa619
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.404270 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.519941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerStarted","Data":"54382ebf6d0365f3e723dcb0b86b61bf954b5d447c3f3fa91baf55437c92a56f"}
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.522523 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf","Type":"ContainerStarted","Data":"a98d1a6e7a11c0a58e42dee7ce8c7c0d926953bcbe0d5ed80a5ca4ae470aa619"}
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.524717 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" event={"ID":"c3acbe65-7957-4891-897a-d920fd17f93d","Type":"ContainerStarted","Data":"6247872bcb2dc5f9fc673c03a3ea9b37148f0bca9ef0d1f55009b5dde97a4e05"}
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.524875 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.534349 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerStarted","Data":"a6b9f6b0f0a30ac183d88502b90a91589c5f80585c993295b0031ba84513fac7"}
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.555596 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" podStartSLOduration=3.55557873 podStartE2EDuration="3.55557873s" podCreationTimestamp="2026-01-27 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:24.553301074 +0000 UTC m=+4775.911154748" watchObservedRunningTime="2026-01-27 20:01:24.55557873 +0000 UTC m=+4775.913432394"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.794318 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 20:01:24 crc kubenswrapper[4915]: W0127 20:01:24.802050 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc7f0cb_29c3_471a_bb68_723c51a8bc6d.slice/crio-119fb3282aa5b44b0cb4d83aae46667f8f49120c06f94c211c416194a66b6f0e WatchSource:0}: Error finding container 119fb3282aa5b44b0cb4d83aae46667f8f49120c06f94c211c416194a66b6f0e: Status 404 returned error can't find the container with id 119fb3282aa5b44b0cb4d83aae46667f8f49120c06f94c211c416194a66b6f0e
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.901604 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.904657 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.911263 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.912326 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.912652 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.913150 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rg785"
Jan 27 20:01:24 crc kubenswrapper[4915]: I0127 20:01:24.914805 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017033 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlbm\" (UniqueName: \"kubernetes.io/projected/f79f430a-9a9a-4bd9-b475-09ab36193443-kube-api-access-mxlbm\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017095 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017117 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017288 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017369 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce044503-54ce-427b-9c77-70dddd7e7747\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce044503-54ce-427b-9c77-70dddd7e7747\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017394 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.017460 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.119121 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120119 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120163 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120180 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120206 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce044503-54ce-427b-9c77-70dddd7e7747\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce044503-54ce-427b-9c77-70dddd7e7747\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120539 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.120617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlbm\" (UniqueName: \"kubernetes.io/projected/f79f430a-9a9a-4bd9-b475-09ab36193443-kube-api-access-mxlbm\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.121163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.121722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.125425 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.125436 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f79f430a-9a9a-4bd9-b475-09ab36193443-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.126574 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79f430a-9a9a-4bd9-b475-09ab36193443-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.128442 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79f430a-9a9a-4bd9-b475-09ab36193443-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.135533 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.135582 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce044503-54ce-427b-9c77-70dddd7e7747\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce044503-54ce-427b-9c77-70dddd7e7747\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9a58ac6a79cd7d2f5d8b7224992f875d6b747097ed45618f4f58d9b54692b288/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.140507 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlbm\" (UniqueName: \"kubernetes.io/projected/f79f430a-9a9a-4bd9-b475-09ab36193443-kube-api-access-mxlbm\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.170928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce044503-54ce-427b-9c77-70dddd7e7747\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce044503-54ce-427b-9c77-70dddd7e7747\") pod \"openstack-cell1-galera-0\" (UID: \"f79f430a-9a9a-4bd9-b475-09ab36193443\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.231019 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.542841 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec21d9de-6a54-4606-bcb5-ca3b1dad1edf","Type":"ContainerStarted","Data":"2b74cfa3b639b2303335dba7b53d4c944189a5f5b4c49c764f292d25fc9d251c"}
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.543284 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.544118 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d","Type":"ContainerStarted","Data":"ea3af828db2094674162658ebd7ff1e824d67052285a1e374878013b6759f8c5"}
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.544148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d","Type":"ContainerStarted","Data":"119fb3282aa5b44b0cb4d83aae46667f8f49120c06f94c211c416194a66b6f0e"}
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.552909 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerStarted","Data":"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f"}
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.579982 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.579960502 podStartE2EDuration="2.579960502s" podCreationTimestamp="2026-01-27 20:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:25.574711723 +0000 UTC m=+4776.932565407" watchObservedRunningTime="2026-01-27 20:01:25.579960502 +0000 UTC m=+4776.937814166"
Jan 27 20:01:25 crc kubenswrapper[4915]: I0127 20:01:25.724552 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 20:01:26 crc kubenswrapper[4915]: I0127 20:01:26.564171 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f79f430a-9a9a-4bd9-b475-09ab36193443","Type":"ContainerStarted","Data":"b32848d390f8a3b15446c66b0d96129224ca3d235da1fbd643024a738ad67ea1"}
Jan 27 20:01:26 crc kubenswrapper[4915]: I0127 20:01:26.565468 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f79f430a-9a9a-4bd9-b475-09ab36193443","Type":"ContainerStarted","Data":"1382b32c8f7c14dc46dd091da51970aad25390ee03ce99e10c77e410b9990643"}
Jan 27 20:01:29 crc kubenswrapper[4915]: I0127 20:01:29.049074 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.204117 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.370613 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"]
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.372424 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.389386 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"]
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.520388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5nt\" (UniqueName: \"kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.520445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.520475 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.605582 4915 generic.go:334] "Generic (PLEG): container finished" podID="f79f430a-9a9a-4bd9-b475-09ab36193443" containerID="b32848d390f8a3b15446c66b0d96129224ca3d235da1fbd643024a738ad67ea1" exitCode=0
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.605664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f79f430a-9a9a-4bd9-b475-09ab36193443","Type":"ContainerDied","Data":"b32848d390f8a3b15446c66b0d96129224ca3d235da1fbd643024a738ad67ea1"}
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.608724 4915 generic.go:334] "Generic (PLEG): container finished" podID="6cc7f0cb-29c3-471a-bb68-723c51a8bc6d" containerID="ea3af828db2094674162658ebd7ff1e824d67052285a1e374878013b6759f8c5" exitCode=0
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.608761 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d","Type":"ContainerDied","Data":"ea3af828db2094674162658ebd7ff1e824d67052285a1e374878013b6759f8c5"}
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.622402 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5nt\" (UniqueName: \"kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.622459 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.622491 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.623043 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.623095 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.643502 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5nt\" (UniqueName: \"kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt\") pod \"redhat-operators-qg7qd\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.719324 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:31 crc kubenswrapper[4915]: I0127 20:01:31.974233 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj"
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.042095 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"]
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.042353 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="dnsmasq-dns" containerID="cri-o://5a89883cd51024f4854534749f80c0a809a4252ed6f8e0117a63452e33a704ef" gracePeriod=10
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.165445 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"]
Jan 27 20:01:32 crc kubenswrapper[4915]: W0127 20:01:32.374455 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb680591d_a4b1_45b8_8e3b_c7ab219004e3.slice/crio-c8e1845e548acf3bae2320fe6e486f3b1df8a9fd5626e889da6c5c7447bba400 WatchSource:0}: Error finding container c8e1845e548acf3bae2320fe6e486f3b1df8a9fd5626e889da6c5c7447bba400: Status 404 returned error can't find the container with id c8e1845e548acf3bae2320fe6e486f3b1df8a9fd5626e889da6c5c7447bba400
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.618093 4915 generic.go:334] "Generic (PLEG): container finished" podID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerID="5a89883cd51024f4854534749f80c0a809a4252ed6f8e0117a63452e33a704ef" exitCode=0
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.618127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" event={"ID":"a1d2a764-bb16-4c36-bf1e-ae4550c1071d","Type":"ContainerDied","Data":"5a89883cd51024f4854534749f80c0a809a4252ed6f8e0117a63452e33a704ef"}
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.620598 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6cc7f0cb-29c3-471a-bb68-723c51a8bc6d","Type":"ContainerStarted","Data":"a19df823f2d436301388324afff41ebe2d2f74068f8e95c377aac128a9d43a93"}
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.623410 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f79f430a-9a9a-4bd9-b475-09ab36193443","Type":"ContainerStarted","Data":"be86072ed880c2f4281be19dd75d4157687371ea2d918e79b9c0e2231b3a4c8d"}
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.625176 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerStarted","Data":"0807482bc10b1d719d9ec9864fbab88f908bddc22daf5751e177fc48abf86e04"}
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.625221 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerStarted","Data":"c8e1845e548acf3bae2320fe6e486f3b1df8a9fd5626e889da6c5c7447bba400"}
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.653238 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.653204384 podStartE2EDuration="10.653204384s" podCreationTimestamp="2026-01-27 20:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:32.643785552 +0000 UTC m=+4784.001639236" watchObservedRunningTime="2026-01-27 20:01:32.653204384 +0000 UTC m=+4784.011058048"
Jan 27 20:01:32 crc kubenswrapper[4915]: I0127 20:01:32.672218 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.6721733 podStartE2EDuration="9.6721733s" podCreationTimestamp="2026-01-27 20:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:32.666745857 +0000 UTC m=+4784.024599521" watchObservedRunningTime="2026-01-27 20:01:32.6721733 +0000 UTC m=+4784.030026964"
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.279377 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4"
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.349533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2px4s\" (UniqueName: \"kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s\") pod \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") "
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.349874 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config\") pod \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") "
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.350528 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc\") pod \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\" (UID: \"a1d2a764-bb16-4c36-bf1e-ae4550c1071d\") "
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.360322 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s" (OuterVolumeSpecName: "kube-api-access-2px4s") pod "a1d2a764-bb16-4c36-bf1e-ae4550c1071d" (UID: "a1d2a764-bb16-4c36-bf1e-ae4550c1071d"). InnerVolumeSpecName "kube-api-access-2px4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.389612 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1d2a764-bb16-4c36-bf1e-ae4550c1071d" (UID: "a1d2a764-bb16-4c36-bf1e-ae4550c1071d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.394827 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config" (OuterVolumeSpecName: "config") pod "a1d2a764-bb16-4c36-bf1e-ae4550c1071d" (UID: "a1d2a764-bb16-4c36-bf1e-ae4550c1071d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.452858 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2px4s\" (UniqueName: \"kubernetes.io/projected/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-kube-api-access-2px4s\") on node \"crc\" DevicePath \"\""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.452890 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.452902 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1d2a764-bb16-4c36-bf1e-ae4550c1071d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.633632 4915 generic.go:334] "Generic (PLEG): container finished" podID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerID="0807482bc10b1d719d9ec9864fbab88f908bddc22daf5751e177fc48abf86e04" exitCode=0
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.633713 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerDied","Data":"0807482bc10b1d719d9ec9864fbab88f908bddc22daf5751e177fc48abf86e04"}
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.636560 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4" event={"ID":"a1d2a764-bb16-4c36-bf1e-ae4550c1071d","Type":"ContainerDied","Data":"1092df53e4a8e3ffe299c5b7af25bd087d40ce765226b701fbad18c762eebe01"}
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.636664 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-kmrp4"
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.636787 4915 scope.go:117] "RemoveContainer" containerID="5a89883cd51024f4854534749f80c0a809a4252ed6f8e0117a63452e33a704ef"
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.983144 4915 scope.go:117] "RemoveContainer" containerID="0627374ba5873023f41b6c633b5ca9dc7c223b289294c9a7874a5dc181d78c6e"
Jan 27 20:01:33 crc kubenswrapper[4915]: I0127 20:01:33.992222 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"]
Jan 27 20:01:34 crc kubenswrapper[4915]: I0127 20:01:33.998515 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-kmrp4"]
Jan 27 20:01:34 crc kubenswrapper[4915]: I0127 20:01:34.182185 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 20:01:34 crc kubenswrapper[4915]: I0127 20:01:34.182246 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 20:01:35 crc kubenswrapper[4915]: E0127 20:01:35.084983 4915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:56680->38.102.83.106:36797: write tcp 38.102.83.106:56680->38.102.83.106:36797: write: broken pipe
Jan 27 20:01:35 crc kubenswrapper[4915]: I0127 20:01:35.232050 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:35 crc kubenswrapper[4915]: I0127 20:01:35.232102 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:35 crc kubenswrapper[4915]: I0127 20:01:35.368855 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" path="/var/lib/kubelet/pods/a1d2a764-bb16-4c36-bf1e-ae4550c1071d/volumes"
Jan 27 20:01:35 crc kubenswrapper[4915]: I0127 20:01:35.653914 4915 generic.go:334] "Generic (PLEG): container finished" podID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerID="68b0e630b4dff554fc5ac5e5077b469e157afb8b30dc8201cbf2df4c54d33f29" exitCode=0
Jan 27 20:01:35 crc kubenswrapper[4915]: I0127 20:01:35.653975 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerDied","Data":"68b0e630b4dff554fc5ac5e5077b469e157afb8b30dc8201cbf2df4c54d33f29"}
Jan 27 20:01:37 crc kubenswrapper[4915]: I0127 20:01:37.543585 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:37 crc kubenswrapper[4915]: I0127 20:01:37.674842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerStarted","Data":"6370f22e28fc16eadd74de96300db3f9bcdf00064a3201567dd241f3fd0a16da"}
Jan 27 20:01:37 crc kubenswrapper[4915]: I0127 20:01:37.698942 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qg7qd" podStartSLOduration=3.667960132 podStartE2EDuration="6.698918802s" podCreationTimestamp="2026-01-27 20:01:31 +0000 UTC" firstStartedPulling="2026-01-27 20:01:33.635486771 +0000 UTC m=+4784.993340435" lastFinishedPulling="2026-01-27 20:01:36.666445421 +0000 UTC m=+4788.024299105" observedRunningTime="2026-01-27 20:01:37.691211503 +0000 UTC m=+4789.049065177" watchObservedRunningTime="2026-01-27 20:01:37.698918802 +0000 UTC m=+4789.056772466"
Jan 27 20:01:37 crc kubenswrapper[4915]: I0127 20:01:37.709289 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 20:01:40 crc kubenswrapper[4915]: I0127 20:01:40.282428 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 27 20:01:40 crc kubenswrapper[4915]: I0127 20:01:40.368212 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 27 20:01:41 crc kubenswrapper[4915]: I0127 20:01:41.719680 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:41 crc kubenswrapper[4915]: I0127 20:01:41.719726 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qg7qd"
Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.476432 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dhhxm"]
Jan 27 20:01:42 crc kubenswrapper[4915]: E0127 20:01:42.477112 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="init"
Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.477135 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="init"
Jan 27 20:01:42 crc kubenswrapper[4915]: E0127 20:01:42.477164 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="dnsmasq-dns"
Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.477172 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="dnsmasq-dns"
Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.477354 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d2a764-bb16-4c36-bf1e-ae4550c1071d" containerName="dnsmasq-dns"
Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.477937 4915 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.486920 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.494759 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dhhxm"] Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.504679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq6g\" (UniqueName: \"kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g\") pod \"root-account-create-update-dhhxm\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.504754 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts\") pod \"root-account-create-update-dhhxm\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.606331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnq6g\" (UniqueName: \"kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g\") pod \"root-account-create-update-dhhxm\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.606397 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts\") pod \"root-account-create-update-dhhxm\" (UID: 
\"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.607255 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts\") pod \"root-account-create-update-dhhxm\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.634503 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnq6g\" (UniqueName: \"kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g\") pod \"root-account-create-update-dhhxm\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.774089 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qg7qd" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="registry-server" probeResult="failure" output=< Jan 27 20:01:42 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 20:01:42 crc kubenswrapper[4915]: > Jan 27 20:01:42 crc kubenswrapper[4915]: I0127 20:01:42.795423 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:43 crc kubenswrapper[4915]: I0127 20:01:43.302951 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dhhxm"] Jan 27 20:01:43 crc kubenswrapper[4915]: I0127 20:01:43.720622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhhxm" event={"ID":"21bf5523-8bfc-4cfd-bbb7-f82b96975d85","Type":"ContainerStarted","Data":"43b05342dc1519e61b23f5a664f15c2e58a9173e8666d9e3214fb6bed984e1b0"} Jan 27 20:01:43 crc kubenswrapper[4915]: I0127 20:01:43.720953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhhxm" event={"ID":"21bf5523-8bfc-4cfd-bbb7-f82b96975d85","Type":"ContainerStarted","Data":"88eaab7f3c75ecad437f41776ca725c114d9b66567d47e3bde4699c6502cb1c8"} Jan 27 20:01:43 crc kubenswrapper[4915]: I0127 20:01:43.735053 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dhhxm" podStartSLOduration=1.7350364470000001 podStartE2EDuration="1.735036447s" podCreationTimestamp="2026-01-27 20:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:43.73393048 +0000 UTC m=+4795.091784144" watchObservedRunningTime="2026-01-27 20:01:43.735036447 +0000 UTC m=+4795.092890111" Jan 27 20:01:44 crc kubenswrapper[4915]: I0127 20:01:44.729630 4915 generic.go:334] "Generic (PLEG): container finished" podID="21bf5523-8bfc-4cfd-bbb7-f82b96975d85" containerID="43b05342dc1519e61b23f5a664f15c2e58a9173e8666d9e3214fb6bed984e1b0" exitCode=0 Jan 27 20:01:44 crc kubenswrapper[4915]: I0127 20:01:44.729676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhhxm" 
event={"ID":"21bf5523-8bfc-4cfd-bbb7-f82b96975d85","Type":"ContainerDied","Data":"43b05342dc1519e61b23f5a664f15c2e58a9173e8666d9e3214fb6bed984e1b0"} Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.002324 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.069064 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnq6g\" (UniqueName: \"kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g\") pod \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.069296 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts\") pod \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\" (UID: \"21bf5523-8bfc-4cfd-bbb7-f82b96975d85\") " Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.069903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21bf5523-8bfc-4cfd-bbb7-f82b96975d85" (UID: "21bf5523-8bfc-4cfd-bbb7-f82b96975d85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.075632 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g" (OuterVolumeSpecName: "kube-api-access-pnq6g") pod "21bf5523-8bfc-4cfd-bbb7-f82b96975d85" (UID: "21bf5523-8bfc-4cfd-bbb7-f82b96975d85"). InnerVolumeSpecName "kube-api-access-pnq6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.171416 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.171458 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnq6g\" (UniqueName: \"kubernetes.io/projected/21bf5523-8bfc-4cfd-bbb7-f82b96975d85-kube-api-access-pnq6g\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.745668 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dhhxm" event={"ID":"21bf5523-8bfc-4cfd-bbb7-f82b96975d85","Type":"ContainerDied","Data":"88eaab7f3c75ecad437f41776ca725c114d9b66567d47e3bde4699c6502cb1c8"} Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.746083 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88eaab7f3c75ecad437f41776ca725c114d9b66567d47e3bde4699c6502cb1c8" Jan 27 20:01:46 crc kubenswrapper[4915]: I0127 20:01:46.745725 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dhhxm" Jan 27 20:01:48 crc kubenswrapper[4915]: I0127 20:01:48.907243 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dhhxm"] Jan 27 20:01:48 crc kubenswrapper[4915]: I0127 20:01:48.915149 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dhhxm"] Jan 27 20:01:49 crc kubenswrapper[4915]: I0127 20:01:49.368917 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21bf5523-8bfc-4cfd-bbb7-f82b96975d85" path="/var/lib/kubelet/pods/21bf5523-8bfc-4cfd-bbb7-f82b96975d85/volumes" Jan 27 20:01:51 crc kubenswrapper[4915]: I0127 20:01:51.768887 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qg7qd" Jan 27 20:01:51 crc kubenswrapper[4915]: I0127 20:01:51.809721 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qg7qd" Jan 27 20:01:52 crc kubenswrapper[4915]: I0127 20:01:52.014034 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"] Jan 27 20:01:52 crc kubenswrapper[4915]: I0127 20:01:52.795189 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qg7qd" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="registry-server" containerID="cri-o://6370f22e28fc16eadd74de96300db3f9bcdf00064a3201567dd241f3fd0a16da" gracePeriod=2 Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.806775 4915 generic.go:334] "Generic (PLEG): container finished" podID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerID="6370f22e28fc16eadd74de96300db3f9bcdf00064a3201567dd241f3fd0a16da" exitCode=0 Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.807015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" 
event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerDied","Data":"6370f22e28fc16eadd74de96300db3f9bcdf00064a3201567dd241f3fd0a16da"} Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.909610 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l25hd"] Jan 27 20:01:53 crc kubenswrapper[4915]: E0127 20:01:53.910048 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21bf5523-8bfc-4cfd-bbb7-f82b96975d85" containerName="mariadb-account-create-update" Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.910067 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="21bf5523-8bfc-4cfd-bbb7-f82b96975d85" containerName="mariadb-account-create-update" Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.910272 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="21bf5523-8bfc-4cfd-bbb7-f82b96975d85" containerName="mariadb-account-create-update" Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.910883 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.914581 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.916321 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l25hd"] Jan 27 20:01:53 crc kubenswrapper[4915]: I0127 20:01:53.923566 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qg7qd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.003012 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd5nt\" (UniqueName: \"kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt\") pod \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.003120 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content\") pod \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.003254 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities\") pod \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\" (UID: \"b680591d-a4b1-45b8-8e3b-c7ab219004e3\") " Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.003494 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.003522 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rvg\" (UniqueName: \"kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 
27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.004720 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities" (OuterVolumeSpecName: "utilities") pod "b680591d-a4b1-45b8-8e3b-c7ab219004e3" (UID: "b680591d-a4b1-45b8-8e3b-c7ab219004e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.009008 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt" (OuterVolumeSpecName: "kube-api-access-zd5nt") pod "b680591d-a4b1-45b8-8e3b-c7ab219004e3" (UID: "b680591d-a4b1-45b8-8e3b-c7ab219004e3"). InnerVolumeSpecName "kube-api-access-zd5nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.105592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.105634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rvg\" (UniqueName: \"kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.105728 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:54 crc kubenswrapper[4915]: 
I0127 20:01:54.105740 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd5nt\" (UniqueName: \"kubernetes.io/projected/b680591d-a4b1-45b8-8e3b-c7ab219004e3-kube-api-access-zd5nt\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.106433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.126072 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b680591d-a4b1-45b8-8e3b-c7ab219004e3" (UID: "b680591d-a4b1-45b8-8e3b-c7ab219004e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.134440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rvg\" (UniqueName: \"kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg\") pod \"root-account-create-update-l25hd\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.207255 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b680591d-a4b1-45b8-8e3b-c7ab219004e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.232242 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.644260 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l25hd"] Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.818782 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qg7qd" event={"ID":"b680591d-a4b1-45b8-8e3b-c7ab219004e3","Type":"ContainerDied","Data":"c8e1845e548acf3bae2320fe6e486f3b1df8a9fd5626e889da6c5c7447bba400"} Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.818864 4915 scope.go:117] "RemoveContainer" containerID="6370f22e28fc16eadd74de96300db3f9bcdf00064a3201567dd241f3fd0a16da" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.818898 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qg7qd" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.820126 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l25hd" event={"ID":"4a06248c-abed-4c8f-b135-d45412d4427a","Type":"ContainerStarted","Data":"e5f27fc14a78ae47231fb5b82ce14f84b319dafb12b9959400c647fff6336f05"} Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.836634 4915 scope.go:117] "RemoveContainer" containerID="68b0e630b4dff554fc5ac5e5077b469e157afb8b30dc8201cbf2df4c54d33f29" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.861144 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"] Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.867424 4915 scope.go:117] "RemoveContainer" containerID="0807482bc10b1d719d9ec9864fbab88f908bddc22daf5751e177fc48abf86e04" Jan 27 20:01:54 crc kubenswrapper[4915]: I0127 20:01:54.873458 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qg7qd"] Jan 27 20:01:55 crc kubenswrapper[4915]: I0127 
20:01:55.372756 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" path="/var/lib/kubelet/pods/b680591d-a4b1-45b8-8e3b-c7ab219004e3/volumes" Jan 27 20:01:55 crc kubenswrapper[4915]: I0127 20:01:55.828082 4915 generic.go:334] "Generic (PLEG): container finished" podID="4a06248c-abed-4c8f-b135-d45412d4427a" containerID="dcf2d93585da668ed3eba54949c404bbb14978f679eb0561e387088e0bb97bb3" exitCode=0 Jan 27 20:01:55 crc kubenswrapper[4915]: I0127 20:01:55.828137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l25hd" event={"ID":"4a06248c-abed-4c8f-b135-d45412d4427a","Type":"ContainerDied","Data":"dcf2d93585da668ed3eba54949c404bbb14978f679eb0561e387088e0bb97bb3"} Jan 27 20:01:56 crc kubenswrapper[4915]: I0127 20:01:56.837994 4915 generic.go:334] "Generic (PLEG): container finished" podID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerID="a6b9f6b0f0a30ac183d88502b90a91589c5f80585c993295b0031ba84513fac7" exitCode=0 Jan 27 20:01:56 crc kubenswrapper[4915]: I0127 20:01:56.838100 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerDied","Data":"a6b9f6b0f0a30ac183d88502b90a91589c5f80585c993295b0031ba84513fac7"} Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.129931 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l25hd" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.254445 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9rvg\" (UniqueName: \"kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg\") pod \"4a06248c-abed-4c8f-b135-d45412d4427a\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.254629 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts\") pod \"4a06248c-abed-4c8f-b135-d45412d4427a\" (UID: \"4a06248c-abed-4c8f-b135-d45412d4427a\") " Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.255410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a06248c-abed-4c8f-b135-d45412d4427a" (UID: "4a06248c-abed-4c8f-b135-d45412d4427a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.258033 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg" (OuterVolumeSpecName: "kube-api-access-l9rvg") pod "4a06248c-abed-4c8f-b135-d45412d4427a" (UID: "4a06248c-abed-4c8f-b135-d45412d4427a"). InnerVolumeSpecName "kube-api-access-l9rvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.356870 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9rvg\" (UniqueName: \"kubernetes.io/projected/4a06248c-abed-4c8f-b135-d45412d4427a-kube-api-access-l9rvg\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.356911 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a06248c-abed-4c8f-b135-d45412d4427a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.846819 4915 generic.go:334] "Generic (PLEG): container finished" podID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerID="4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f" exitCode=0 Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.846883 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerDied","Data":"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f"} Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.852566 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l25hd" event={"ID":"4a06248c-abed-4c8f-b135-d45412d4427a","Type":"ContainerDied","Data":"e5f27fc14a78ae47231fb5b82ce14f84b319dafb12b9959400c647fff6336f05"} Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.852614 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5f27fc14a78ae47231fb5b82ce14f84b319dafb12b9959400c647fff6336f05" Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.852678 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l25hd"
Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.864569 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerStarted","Data":"33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d"}
Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.865352 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 27 20:01:57 crc kubenswrapper[4915]: I0127 20:01:57.923631 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.923609717 podStartE2EDuration="36.923609717s" podCreationTimestamp="2026-01-27 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:57.916343218 +0000 UTC m=+4809.274196882" watchObservedRunningTime="2026-01-27 20:01:57.923609717 +0000 UTC m=+4809.281463381"
Jan 27 20:01:58 crc kubenswrapper[4915]: I0127 20:01:58.872460 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerStarted","Data":"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5"}
Jan 27 20:01:58 crc kubenswrapper[4915]: I0127 20:01:58.873929 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 20:01:58 crc kubenswrapper[4915]: I0127 20:01:58.895357 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.895336135 podStartE2EDuration="37.895336135s" podCreationTimestamp="2026-01-27 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:01:58.891515841 +0000 UTC m=+4810.249369505" watchObservedRunningTime="2026-01-27 20:01:58.895336135 +0000 UTC m=+4810.253189799"
Jan 27 20:02:12 crc kubenswrapper[4915]: I0127 20:02:12.497146 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 20:02:13 crc kubenswrapper[4915]: I0127 20:02:13.169019 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.113674 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"]
Jan 27 20:02:18 crc kubenswrapper[4915]: E0127 20:02:18.114649 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="registry-server"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114683 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="registry-server"
Jan 27 20:02:18 crc kubenswrapper[4915]: E0127 20:02:18.114703 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a06248c-abed-4c8f-b135-d45412d4427a" containerName="mariadb-account-create-update"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114711 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a06248c-abed-4c8f-b135-d45412d4427a" containerName="mariadb-account-create-update"
Jan 27 20:02:18 crc kubenswrapper[4915]: E0127 20:02:18.114729 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="extract-utilities"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114737 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="extract-utilities"
Jan 27 20:02:18 crc kubenswrapper[4915]: E0127 20:02:18.114757 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="extract-content"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114763 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="extract-content"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114939 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a06248c-abed-4c8f-b135-d45412d4427a" containerName="mariadb-account-create-update"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.114963 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b680591d-a4b1-45b8-8e3b-c7ab219004e3" containerName="registry-server"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.115917 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.126367 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"]
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.275743 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.275834 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.275907 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fcf\" (UniqueName: \"kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.377215 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.377302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fcf\" (UniqueName: \"kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.377419 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.378355 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.378535 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.397410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fcf\" (UniqueName: \"kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf\") pod \"dnsmasq-dns-5b7946d7b9-vpsjx\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.435368 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.795850 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:02:18 crc kubenswrapper[4915]: I0127 20:02:18.858219 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"]
Jan 27 20:02:18 crc kubenswrapper[4915]: W0127 20:02:18.862000 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb21b39_c709_40ca_bdc1_8be83768bb5b.slice/crio-74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc WatchSource:0}: Error finding container 74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc: Status 404 returned error can't find the container with id 74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc
Jan 27 20:02:19 crc kubenswrapper[4915]: I0127 20:02:19.046026 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerID="7ef96f8218d470885fe2e4cf1dcf50865e97144dc3cabc08081a586c3c6882d8" exitCode=0
Jan 27 20:02:19 crc kubenswrapper[4915]: I0127 20:02:19.046123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" event={"ID":"8fb21b39-c709-40ca-bdc1-8be83768bb5b","Type":"ContainerDied","Data":"7ef96f8218d470885fe2e4cf1dcf50865e97144dc3cabc08081a586c3c6882d8"}
Jan 27 20:02:19 crc kubenswrapper[4915]: I0127 20:02:19.046460 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" event={"ID":"8fb21b39-c709-40ca-bdc1-8be83768bb5b","Type":"ContainerStarted","Data":"74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc"}
Jan 27 20:02:19 crc kubenswrapper[4915]: I0127 20:02:19.529307 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 20:02:20 crc kubenswrapper[4915]: I0127 20:02:20.054773 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" event={"ID":"8fb21b39-c709-40ca-bdc1-8be83768bb5b","Type":"ContainerStarted","Data":"502c4aff969ee29cb603ffa4e1aa3108a4bdca6260e4c2ba8b9f272264df359b"}
Jan 27 20:02:20 crc kubenswrapper[4915]: I0127 20:02:20.054965 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx"
Jan 27 20:02:20 crc kubenswrapper[4915]: I0127 20:02:20.529064 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="rabbitmq" containerID="cri-o://33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d" gracePeriod=604799
Jan 27 20:02:20 crc kubenswrapper[4915]: I0127 20:02:20.624746 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:02:20 crc kubenswrapper[4915]: I0127 20:02:20.624836 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:02:21 crc kubenswrapper[4915]: I0127 20:02:21.154315 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="rabbitmq" containerID="cri-o://bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5" gracePeriod=604799
Jan 27 20:02:22 crc kubenswrapper[4915]: I0127 20:02:22.493640 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.244:5672: connect: connection refused"
Jan 27 20:02:23 crc kubenswrapper[4915]: I0127 20:02:23.167341 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5672: connect: connection refused"
Jan 27 20:02:26 crc kubenswrapper[4915]: E0127 20:02:26.804830 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f8b37f_1bac_42b5_9529_e2671a11fc2f.slice/crio-33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.121340 4915 generic.go:334] "Generic (PLEG): container finished" podID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerID="33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d" exitCode=0
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.121399 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerDied","Data":"33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d"}
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.222396 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.248186 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" podStartSLOduration=9.24809287 podStartE2EDuration="9.24809287s" podCreationTimestamp="2026-01-27 20:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:02:20.071093297 +0000 UTC m=+4831.428946971" watchObservedRunningTime="2026-01-27 20:02:27.24809287 +0000 UTC m=+4838.605946524"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.318691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.318775 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.318851 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9bj7\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.319991 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320025 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320058 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320178 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320489 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\" (UID: \"42f8b37f-1bac-42b5-9529-e2671a11fc2f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.320591 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.321081 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.321098 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.321548 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.325512 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7" (OuterVolumeSpecName: "kube-api-access-q9bj7") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "kube-api-access-q9bj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.326272 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info" (OuterVolumeSpecName: "pod-info") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.338354 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32" (OuterVolumeSpecName: "persistence") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.338413 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.342892 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf" (OuterVolumeSpecName: "server-conf") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.395022 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "42f8b37f-1bac-42b5-9529-e2671a11fc2f" (UID: "42f8b37f-1bac-42b5-9529-e2671a11fc2f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.422965 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423005 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f8b37f-1bac-42b5-9529-e2671a11fc2f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423015 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f8b37f-1bac-42b5-9529-e2671a11fc2f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423053 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") on node \"crc\" "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423067 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f8b37f-1bac-42b5-9529-e2671a11fc2f-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423077 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9bj7\" (UniqueName: \"kubernetes.io/projected/42f8b37f-1bac-42b5-9529-e2671a11fc2f-kube-api-access-q9bj7\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.423087 4915 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f8b37f-1bac-42b5-9529-e2671a11fc2f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.443338 4915 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.443474 4915 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32") on node "crc"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.526169 4915 reconciler_common.go:293] "Volume detached for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.655321 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728313 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728387 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg8fj\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728425 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728456 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728489 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728528 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728551 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.728804 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"c4797330-4b41-4a32-b43b-81cb7d6e946f\" (UID: \"c4797330-4b41-4a32-b43b-81cb7d6e946f\") "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.731944 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.734603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj" (OuterVolumeSpecName: "kube-api-access-fg8fj") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "kube-api-access-fg8fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.735049 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.735341 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.737642 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.742360 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info" (OuterVolumeSpecName: "pod-info") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.757546 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a" (OuterVolumeSpecName: "persistence") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "pvc-807d62f3-5761-4647-8431-4568b999b71a". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.762635 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf" (OuterVolumeSpecName: "server-conf") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.807893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c4797330-4b41-4a32-b43b-81cb7d6e946f" (UID: "c4797330-4b41-4a32-b43b-81cb7d6e946f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838150 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838233 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") on node \"crc\" "
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838254 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c4797330-4b41-4a32-b43b-81cb7d6e946f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838266 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg8fj\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-kube-api-access-fg8fj\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838278 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838289 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838299 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c4797330-4b41-4a32-b43b-81cb7d6e946f-pod-info\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838307 4915 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c4797330-4b41-4a32-b43b-81cb7d6e946f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.838315 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c4797330-4b41-4a32-b43b-81cb7d6e946f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.852612 4915 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.852965 4915 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-807d62f3-5761-4647-8431-4568b999b71a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a") on node "crc"
Jan 27 20:02:27 crc kubenswrapper[4915]: I0127 20:02:27.939590 4915 reconciler_common.go:293] "Volume detached for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") on node \"crc\" DevicePath \"\""
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.133074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f8b37f-1bac-42b5-9529-e2671a11fc2f","Type":"ContainerDied","Data":"17290e389f4efdf964f106db874438aa4a5dfa65c38d1ee0a45e875a949b1841"}
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.133100 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.133138 4915 scope.go:117] "RemoveContainer" containerID="33d809371bab6f59c9007d7ca2bcf11d6fd7a240695b7b9a447d23451da8672d"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.134724 4915 generic.go:334] "Generic (PLEG): container finished" podID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerID="bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5" exitCode=0
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.134765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerDied","Data":"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5"}
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.134829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c4797330-4b41-4a32-b43b-81cb7d6e946f","Type":"ContainerDied","Data":"54382ebf6d0365f3e723dcb0b86b61bf954b5d447c3f3fa91baf55437c92a56f"}
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.134897 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.165545 4915 scope.go:117] "RemoveContainer" containerID="a6b9f6b0f0a30ac183d88502b90a91589c5f80585c993295b0031ba84513fac7"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.174089 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.180707 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.201120 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.202397 4915 scope.go:117] "RemoveContainer" containerID="bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.208920 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.221402 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:02:28 crc kubenswrapper[4915]: E0127 20:02:28.221849 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="setup-container"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.221871 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="setup-container"
Jan 27 20:02:28 crc kubenswrapper[4915]: E0127 20:02:28.221899 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="rabbitmq"
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.221909 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="rabbitmq"
Jan 27 20:02:28 crc
kubenswrapper[4915]: E0127 20:02:28.221918 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="rabbitmq" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.221925 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="rabbitmq" Jan 27 20:02:28 crc kubenswrapper[4915]: E0127 20:02:28.221940 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="setup-container" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.221947 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="setup-container" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.222117 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" containerName="rabbitmq" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.222145 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" containerName="rabbitmq" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.226539 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.231634 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.231686 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67fn6" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.231814 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.231846 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.235740 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.237824 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.240210 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.243067 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9n6w9" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.243245 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.243457 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.243608 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.244834 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.246981 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.255099 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.263414 4915 scope.go:117] "RemoveContainer" containerID="4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.293387 4915 scope.go:117] "RemoveContainer" containerID="bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5" Jan 27 20:02:28 crc kubenswrapper[4915]: E0127 20:02:28.302300 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5\": container with ID starting with bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5 not found: ID does not exist" 
containerID="bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.302331 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5"} err="failed to get container status \"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5\": rpc error: code = NotFound desc = could not find container \"bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5\": container with ID starting with bdafa1edb8b6ccde093da604772d59e7918af9f2d0c064696e5f81c6d7a48ce5 not found: ID does not exist" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.302354 4915 scope.go:117] "RemoveContainer" containerID="4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f" Jan 27 20:02:28 crc kubenswrapper[4915]: E0127 20:02:28.302767 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f\": container with ID starting with 4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f not found: ID does not exist" containerID="4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.302908 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f"} err="failed to get container status \"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f\": rpc error: code = NotFound desc = could not find container \"4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f\": container with ID starting with 4de384dd2e011ebdfdbb60a43f286410d16828aa7cf92a23860edd896717427f not found: ID does not exist" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.345955 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346010 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346081 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgl67\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-kube-api-access-mgl67\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346119 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzj6v\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-kube-api-access-qzj6v\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346188 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: 
I0127 20:02:28.346257 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90d24c5-20d2-49ea-928c-484d5baeeb9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346372 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c15f8bf-aaf2-46e5-b80a-fe471d471444-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346444 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346487 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: 
I0127 20:02:28.346741 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c15f8bf-aaf2-46e5-b80a-fe471d471444-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346783 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346878 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90d24c5-20d2-49ea-928c-484d5baeeb9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.346932 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 
20:02:28.346994 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.347023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.347054 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.437351 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448518 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgl67\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-kube-api-access-mgl67\") pod \"rabbitmq-server-0\" (UID: 
\"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448606 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzj6v\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-kube-api-access-qzj6v\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448632 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448653 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90d24c5-20d2-49ea-928c-484d5baeeb9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448680 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448696 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c15f8bf-aaf2-46e5-b80a-fe471d471444-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 
20:02:28.448712 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448752 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c15f8bf-aaf2-46e5-b80a-fe471d471444-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448917 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90d24c5-20d2-49ea-928c-484d5baeeb9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448941 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.448971 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.449002 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.449036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.449076 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.449109 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.450174 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.450589 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.451565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.451598 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.452153 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.452640 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.452700 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0c15f8bf-aaf2-46e5-b80a-fe471d471444-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.453304 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.453423 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66c49b705702902e0a71589547a6220019a29c6546a265081af230c092011e5c/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.453583 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.453632 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38c5dd8f125a0881309e51be0e84302bce9cf15962a92bb5571bad5d2ebbe2a2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.454543 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.456042 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0c15f8bf-aaf2-46e5-b80a-fe471d471444-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.457111 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d90d24c5-20d2-49ea-928c-484d5baeeb9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.457893 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d90d24c5-20d2-49ea-928c-484d5baeeb9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.460637 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d90d24c5-20d2-49ea-928c-484d5baeeb9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.462855 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0c15f8bf-aaf2-46e5-b80a-fe471d471444-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.463481 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.475719 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzj6v\" (UniqueName: \"kubernetes.io/projected/d90d24c5-20d2-49ea-928c-484d5baeeb9e-kube-api-access-qzj6v\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.493940 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0ca5657-62ed-472e-a2ca-646e5eecde32\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 
20:02:28.503407 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-807d62f3-5761-4647-8431-4568b999b71a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-807d62f3-5761-4647-8431-4568b999b71a\") pod \"rabbitmq-cell1-server-0\" (UID: \"d90d24c5-20d2-49ea-928c-484d5baeeb9e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.510287 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgl67\" (UniqueName: \"kubernetes.io/projected/0c15f8bf-aaf2-46e5-b80a-fe471d471444-kube-api-access-mgl67\") pod \"rabbitmq-server-0\" (UID: \"0c15f8bf-aaf2-46e5-b80a-fe471d471444\") " pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.516214 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.516508 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="dnsmasq-dns" containerID="cri-o://6247872bcb2dc5f9fc673c03a3ea9b37148f0bca9ef0d1f55009b5dde97a4e05" gracePeriod=10 Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.557967 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:02:28 crc kubenswrapper[4915]: I0127 20:02:28.582368 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.147587 4915 generic.go:334] "Generic (PLEG): container finished" podID="c3acbe65-7957-4891-897a-d920fd17f93d" containerID="6247872bcb2dc5f9fc673c03a3ea9b37148f0bca9ef0d1f55009b5dde97a4e05" exitCode=0 Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.147663 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" event={"ID":"c3acbe65-7957-4891-897a-d920fd17f93d","Type":"ContainerDied","Data":"6247872bcb2dc5f9fc673c03a3ea9b37148f0bca9ef0d1f55009b5dde97a4e05"} Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.399152 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f8b37f-1bac-42b5-9529-e2671a11fc2f" path="/var/lib/kubelet/pods/42f8b37f-1bac-42b5-9529-e2671a11fc2f/volumes" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.400369 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4797330-4b41-4a32-b43b-81cb7d6e946f" path="/var/lib/kubelet/pods/c4797330-4b41-4a32-b43b-81cb7d6e946f/volumes" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.464085 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.653397 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.741387 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:02:29 crc kubenswrapper[4915]: W0127 20:02:29.759626 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90d24c5_20d2_49ea_928c_484d5baeeb9e.slice/crio-01e0d915da55f18519c6c14793cefa1671de637aa0a52ce52d9b7e7b5eb86641 WatchSource:0}: Error finding container 01e0d915da55f18519c6c14793cefa1671de637aa0a52ce52d9b7e7b5eb86641: Status 404 returned error can't find the container with id 01e0d915da55f18519c6c14793cefa1671de637aa0a52ce52d9b7e7b5eb86641 Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.773480 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgc99\" (UniqueName: \"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99\") pod \"c3acbe65-7957-4891-897a-d920fd17f93d\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.773814 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config\") pod \"c3acbe65-7957-4891-897a-d920fd17f93d\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.774050 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc\") pod \"c3acbe65-7957-4891-897a-d920fd17f93d\" (UID: \"c3acbe65-7957-4891-897a-d920fd17f93d\") " Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.778943 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99" (OuterVolumeSpecName: "kube-api-access-cgc99") pod "c3acbe65-7957-4891-897a-d920fd17f93d" (UID: "c3acbe65-7957-4891-897a-d920fd17f93d"). InnerVolumeSpecName "kube-api-access-cgc99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.812083 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3acbe65-7957-4891-897a-d920fd17f93d" (UID: "c3acbe65-7957-4891-897a-d920fd17f93d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.815133 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config" (OuterVolumeSpecName: "config") pod "c3acbe65-7957-4891-897a-d920fd17f93d" (UID: "c3acbe65-7957-4891-897a-d920fd17f93d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.875733 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgc99\" (UniqueName: \"kubernetes.io/projected/c3acbe65-7957-4891-897a-d920fd17f93d-kube-api-access-cgc99\") on node \"crc\" DevicePath \"\"" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.875764 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:02:29 crc kubenswrapper[4915]: I0127 20:02:29.875774 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3acbe65-7957-4891-897a-d920fd17f93d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.156709 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d90d24c5-20d2-49ea-928c-484d5baeeb9e","Type":"ContainerStarted","Data":"01e0d915da55f18519c6c14793cefa1671de637aa0a52ce52d9b7e7b5eb86641"} Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.158522 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c15f8bf-aaf2-46e5-b80a-fe471d471444","Type":"ContainerStarted","Data":"6133df947b6ca44a9dd9d94910928976b624a1f21cbd6ff6682da034598bfc8d"} Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.160995 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" event={"ID":"c3acbe65-7957-4891-897a-d920fd17f93d","Type":"ContainerDied","Data":"7a21ad39b787f5c32219cb94abbf2fb96eb457df1b2cf7d880d1efd8fa0ccbdb"} Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.161273 4915 scope.go:117] "RemoveContainer" containerID="6247872bcb2dc5f9fc673c03a3ea9b37148f0bca9ef0d1f55009b5dde97a4e05" Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 
20:02:30.161037 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-92fkj" Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.181901 4915 scope.go:117] "RemoveContainer" containerID="ed534706fc469c9f590a9c074e7f6ad30d108602443cdd51a59376610511a945" Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.202846 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:02:30 crc kubenswrapper[4915]: I0127 20:02:30.207995 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-92fkj"] Jan 27 20:02:31 crc kubenswrapper[4915]: I0127 20:02:31.173104 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d90d24c5-20d2-49ea-928c-484d5baeeb9e","Type":"ContainerStarted","Data":"95588046b750a28c0726a0456a620570aa02e714b0bf8db1759353ef48bf3a64"} Jan 27 20:02:31 crc kubenswrapper[4915]: I0127 20:02:31.174530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c15f8bf-aaf2-46e5-b80a-fe471d471444","Type":"ContainerStarted","Data":"77b930aff0a1f1e4b8715b19538d55e380b89494d24648c5bf0614873cb7a7bc"} Jan 27 20:02:31 crc kubenswrapper[4915]: I0127 20:02:31.370214 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" path="/var/lib/kubelet/pods/c3acbe65-7957-4891-897a-d920fd17f93d/volumes" Jan 27 20:02:50 crc kubenswrapper[4915]: I0127 20:02:50.624995 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:02:50 crc kubenswrapper[4915]: I0127 20:02:50.626056 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:03:03 crc kubenswrapper[4915]: I0127 20:03:03.449917 4915 generic.go:334] "Generic (PLEG): container finished" podID="d90d24c5-20d2-49ea-928c-484d5baeeb9e" containerID="95588046b750a28c0726a0456a620570aa02e714b0bf8db1759353ef48bf3a64" exitCode=0 Jan 27 20:03:03 crc kubenswrapper[4915]: I0127 20:03:03.450447 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d90d24c5-20d2-49ea-928c-484d5baeeb9e","Type":"ContainerDied","Data":"95588046b750a28c0726a0456a620570aa02e714b0bf8db1759353ef48bf3a64"} Jan 27 20:03:04 crc kubenswrapper[4915]: I0127 20:03:04.457880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d90d24c5-20d2-49ea-928c-484d5baeeb9e","Type":"ContainerStarted","Data":"e7758d1d06e541517f9cf9ef9036957cb19e9fc10709112e5a9493cb935d0f74"} Jan 27 20:03:04 crc kubenswrapper[4915]: I0127 20:03:04.458744 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:03:04 crc kubenswrapper[4915]: I0127 20:03:04.459691 4915 generic.go:334] "Generic (PLEG): container finished" podID="0c15f8bf-aaf2-46e5-b80a-fe471d471444" containerID="77b930aff0a1f1e4b8715b19538d55e380b89494d24648c5bf0614873cb7a7bc" exitCode=0 Jan 27 20:03:04 crc kubenswrapper[4915]: I0127 20:03:04.459713 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c15f8bf-aaf2-46e5-b80a-fe471d471444","Type":"ContainerDied","Data":"77b930aff0a1f1e4b8715b19538d55e380b89494d24648c5bf0614873cb7a7bc"} Jan 27 20:03:04 crc kubenswrapper[4915]: I0127 20:03:04.490515 4915 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.490493148 podStartE2EDuration="36.490493148s" podCreationTimestamp="2026-01-27 20:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:03:04.486745496 +0000 UTC m=+4875.844599180" watchObservedRunningTime="2026-01-27 20:03:04.490493148 +0000 UTC m=+4875.848346812" Jan 27 20:03:05 crc kubenswrapper[4915]: I0127 20:03:05.469670 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0c15f8bf-aaf2-46e5-b80a-fe471d471444","Type":"ContainerStarted","Data":"9499214e4c5b890935b8a9e62ea95cbda654d23063b90242930f3ce352dee749"} Jan 27 20:03:05 crc kubenswrapper[4915]: I0127 20:03:05.470382 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 20:03:05 crc kubenswrapper[4915]: I0127 20:03:05.497758 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.497732519 podStartE2EDuration="37.497732519s" podCreationTimestamp="2026-01-27 20:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:03:05.493021853 +0000 UTC m=+4876.850875527" watchObservedRunningTime="2026-01-27 20:03:05.497732519 +0000 UTC m=+4876.855586193" Jan 27 20:03:18 crc kubenswrapper[4915]: I0127 20:03:18.560012 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 20:03:18 crc kubenswrapper[4915]: I0127 20:03:18.586417 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:03:20 crc kubenswrapper[4915]: I0127 20:03:20.625497 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:03:20 crc kubenswrapper[4915]: I0127 20:03:20.625915 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:03:20 crc kubenswrapper[4915]: I0127 20:03:20.625966 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 20:03:20 crc kubenswrapper[4915]: I0127 20:03:20.626661 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:03:20 crc kubenswrapper[4915]: I0127 20:03:20.626726 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c" gracePeriod=600 Jan 27 20:03:21 crc kubenswrapper[4915]: I0127 20:03:21.596831 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c" exitCode=0 Jan 27 20:03:21 crc kubenswrapper[4915]: I0127 20:03:21.596914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c"} Jan 27 20:03:21 crc kubenswrapper[4915]: I0127 20:03:21.597523 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"} Jan 27 20:03:21 crc kubenswrapper[4915]: I0127 20:03:21.597547 4915 scope.go:117] "RemoveContainer" containerID="be39ac7f0307b0c83059b2ba126f8a9bc11016be2d794eaecd02d4a9b89c699c" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.861657 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:25 crc kubenswrapper[4915]: E0127 20:03:25.862494 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="dnsmasq-dns" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.862507 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="dnsmasq-dns" Jan 27 20:03:25 crc kubenswrapper[4915]: E0127 20:03:25.862520 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="init" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.862526 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="init" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.862673 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3acbe65-7957-4891-897a-d920fd17f93d" containerName="dnsmasq-dns" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.863159 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.867909 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-grfcr" Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.871344 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:25 crc kubenswrapper[4915]: I0127 20:03:25.948604 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8prx\" (UniqueName: \"kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx\") pod \"mariadb-client\" (UID: \"995a993e-5d31-4844-88e6-8ab9db573274\") " pod="openstack/mariadb-client" Jan 27 20:03:26 crc kubenswrapper[4915]: I0127 20:03:26.050495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8prx\" (UniqueName: \"kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx\") pod \"mariadb-client\" (UID: \"995a993e-5d31-4844-88e6-8ab9db573274\") " pod="openstack/mariadb-client" Jan 27 20:03:26 crc kubenswrapper[4915]: I0127 20:03:26.072448 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8prx\" (UniqueName: \"kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx\") pod \"mariadb-client\" (UID: \"995a993e-5d31-4844-88e6-8ab9db573274\") " pod="openstack/mariadb-client" Jan 27 20:03:26 crc kubenswrapper[4915]: I0127 20:03:26.188308 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 20:03:26 crc kubenswrapper[4915]: W0127 20:03:26.715217 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod995a993e_5d31_4844_88e6_8ab9db573274.slice/crio-a5aa3ae62ca123a1c29bb01f12ab519020f2054414923cad6f7d906198861ead WatchSource:0}: Error finding container a5aa3ae62ca123a1c29bb01f12ab519020f2054414923cad6f7d906198861ead: Status 404 returned error can't find the container with id a5aa3ae62ca123a1c29bb01f12ab519020f2054414923cad6f7d906198861ead Jan 27 20:03:26 crc kubenswrapper[4915]: I0127 20:03:26.715950 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:27 crc kubenswrapper[4915]: I0127 20:03:27.655129 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"995a993e-5d31-4844-88e6-8ab9db573274","Type":"ContainerStarted","Data":"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2"} Jan 27 20:03:27 crc kubenswrapper[4915]: I0127 20:03:27.655484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"995a993e-5d31-4844-88e6-8ab9db573274","Type":"ContainerStarted","Data":"a5aa3ae62ca123a1c29bb01f12ab519020f2054414923cad6f7d906198861ead"} Jan 27 20:03:27 crc kubenswrapper[4915]: I0127 20:03:27.677000 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.676980544 podStartE2EDuration="2.676980544s" podCreationTimestamp="2026-01-27 20:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:03:27.674529844 +0000 UTC m=+4899.032383498" watchObservedRunningTime="2026-01-27 20:03:27.676980544 +0000 UTC m=+4899.034834208" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.210019 4915 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.211485 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="995a993e-5d31-4844-88e6-8ab9db573274" containerName="mariadb-client" containerID="cri-o://1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2" gracePeriod=30 Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.722448 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.800190 4915 generic.go:334] "Generic (PLEG): container finished" podID="995a993e-5d31-4844-88e6-8ab9db573274" containerID="1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2" exitCode=143 Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.800261 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.800249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"995a993e-5d31-4844-88e6-8ab9db573274","Type":"ContainerDied","Data":"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2"} Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.800424 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"995a993e-5d31-4844-88e6-8ab9db573274","Type":"ContainerDied","Data":"a5aa3ae62ca123a1c29bb01f12ab519020f2054414923cad6f7d906198861ead"} Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.800447 4915 scope.go:117] "RemoveContainer" containerID="1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.815393 4915 scope.go:117] "RemoveContainer" containerID="1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2" Jan 27 20:03:43 crc 
kubenswrapper[4915]: E0127 20:03:43.815688 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2\": container with ID starting with 1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2 not found: ID does not exist" containerID="1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.815738 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2"} err="failed to get container status \"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2\": rpc error: code = NotFound desc = could not find container \"1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2\": container with ID starting with 1bc7ed3d7cad66331fbd76c129309cae53003cd1b0fe36ca931899463b95cac2 not found: ID does not exist" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.849273 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8prx\" (UniqueName: \"kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx\") pod \"995a993e-5d31-4844-88e6-8ab9db573274\" (UID: \"995a993e-5d31-4844-88e6-8ab9db573274\") " Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.858991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx" (OuterVolumeSpecName: "kube-api-access-g8prx") pod "995a993e-5d31-4844-88e6-8ab9db573274" (UID: "995a993e-5d31-4844-88e6-8ab9db573274"). InnerVolumeSpecName "kube-api-access-g8prx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:03:43 crc kubenswrapper[4915]: I0127 20:03:43.951712 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8prx\" (UniqueName: \"kubernetes.io/projected/995a993e-5d31-4844-88e6-8ab9db573274-kube-api-access-g8prx\") on node \"crc\" DevicePath \"\"" Jan 27 20:03:44 crc kubenswrapper[4915]: I0127 20:03:44.135475 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:44 crc kubenswrapper[4915]: I0127 20:03:44.142524 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 27 20:03:45 crc kubenswrapper[4915]: I0127 20:03:45.384301 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995a993e-5d31-4844-88e6-8ab9db573274" path="/var/lib/kubelet/pods/995a993e-5d31-4844-88e6-8ab9db573274/volumes" Jan 27 20:03:55 crc kubenswrapper[4915]: I0127 20:03:55.741640 4915 scope.go:117] "RemoveContainer" containerID="f835b485a2f0a391b77314b5350b80f347bb0565f43cae1fabace47a6fe1dd3f" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.039467 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:38 crc kubenswrapper[4915]: E0127 20:04:38.040501 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995a993e-5d31-4844-88e6-8ab9db573274" containerName="mariadb-client" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.040521 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="995a993e-5d31-4844-88e6-8ab9db573274" containerName="mariadb-client" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.040782 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="995a993e-5d31-4844-88e6-8ab9db573274" containerName="mariadb-client" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.042433 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.051534 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.214986 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7bf\" (UniqueName: \"kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.215131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.215202 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.316242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7bf\" (UniqueName: \"kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.316339 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.316400 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.316973 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.316955 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.335527 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7bf\" (UniqueName: \"kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf\") pod \"community-operators-58nfc\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.362109 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:38 crc kubenswrapper[4915]: I0127 20:04:38.816768 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:39 crc kubenswrapper[4915]: I0127 20:04:39.270433 4915 generic.go:334] "Generic (PLEG): container finished" podID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerID="2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336" exitCode=0 Jan 27 20:04:39 crc kubenswrapper[4915]: I0127 20:04:39.270525 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerDied","Data":"2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336"} Jan 27 20:04:39 crc kubenswrapper[4915]: I0127 20:04:39.270742 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerStarted","Data":"b407f90b4fdaa189fdd8da54f79ab4169550ef499c3e762264d8a50acdbb99f5"} Jan 27 20:04:39 crc kubenswrapper[4915]: I0127 20:04:39.272242 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:04:40 crc kubenswrapper[4915]: I0127 20:04:40.278749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerStarted","Data":"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685"} Jan 27 20:04:41 crc kubenswrapper[4915]: I0127 20:04:41.289901 4915 generic.go:334] "Generic (PLEG): container finished" podID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerID="9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685" exitCode=0 Jan 27 20:04:41 crc kubenswrapper[4915]: I0127 20:04:41.289947 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerDied","Data":"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685"} Jan 27 20:04:42 crc kubenswrapper[4915]: I0127 20:04:42.297535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerStarted","Data":"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4"} Jan 27 20:04:42 crc kubenswrapper[4915]: I0127 20:04:42.317675 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-58nfc" podStartSLOduration=1.695135225 podStartE2EDuration="4.31765577s" podCreationTimestamp="2026-01-27 20:04:38 +0000 UTC" firstStartedPulling="2026-01-27 20:04:39.271941577 +0000 UTC m=+4970.629795251" lastFinishedPulling="2026-01-27 20:04:41.894462132 +0000 UTC m=+4973.252315796" observedRunningTime="2026-01-27 20:04:42.313870697 +0000 UTC m=+4973.671724371" watchObservedRunningTime="2026-01-27 20:04:42.31765577 +0000 UTC m=+4973.675509434" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.422495 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.425160 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.435137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.598089 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77nn\" (UniqueName: \"kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.598178 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.598240 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.699162 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.699240 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.699301 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77nn\" (UniqueName: \"kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.699868 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.699909 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.719783 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77nn\" (UniqueName: \"kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn\") pod \"redhat-marketplace-cftnr\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:43 crc kubenswrapper[4915]: I0127 20:04:43.751031 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:44 crc kubenswrapper[4915]: I0127 20:04:44.196449 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:44 crc kubenswrapper[4915]: I0127 20:04:44.313334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerStarted","Data":"3bbad01ff8cfccb75e430ddcd98c3d685ffb3a8650e9797d4b757eec622a43de"} Jan 27 20:04:45 crc kubenswrapper[4915]: I0127 20:04:45.321845 4915 generic.go:334] "Generic (PLEG): container finished" podID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerID="b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf" exitCode=0 Jan 27 20:04:45 crc kubenswrapper[4915]: I0127 20:04:45.321906 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerDied","Data":"b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf"} Jan 27 20:04:46 crc kubenswrapper[4915]: I0127 20:04:46.329382 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerStarted","Data":"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540"} Jan 27 20:04:47 crc kubenswrapper[4915]: I0127 20:04:47.341738 4915 generic.go:334] "Generic (PLEG): container finished" podID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerID="ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540" exitCode=0 Jan 27 20:04:47 crc kubenswrapper[4915]: I0127 20:04:47.341832 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" 
event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerDied","Data":"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540"} Jan 27 20:04:48 crc kubenswrapper[4915]: I0127 20:04:48.351025 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerStarted","Data":"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327"} Jan 27 20:04:48 crc kubenswrapper[4915]: I0127 20:04:48.362907 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:48 crc kubenswrapper[4915]: I0127 20:04:48.362967 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:48 crc kubenswrapper[4915]: I0127 20:04:48.374532 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cftnr" podStartSLOduration=2.903040365 podStartE2EDuration="5.374508135s" podCreationTimestamp="2026-01-27 20:04:43 +0000 UTC" firstStartedPulling="2026-01-27 20:04:45.324356513 +0000 UTC m=+4976.682210187" lastFinishedPulling="2026-01-27 20:04:47.795824273 +0000 UTC m=+4979.153677957" observedRunningTime="2026-01-27 20:04:48.365915873 +0000 UTC m=+4979.723769547" watchObservedRunningTime="2026-01-27 20:04:48.374508135 +0000 UTC m=+4979.732361799" Jan 27 20:04:48 crc kubenswrapper[4915]: I0127 20:04:48.428030 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:49 crc kubenswrapper[4915]: I0127 20:04:49.479196 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:50 crc kubenswrapper[4915]: I0127 20:04:50.598631 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:51 crc kubenswrapper[4915]: I0127 20:04:51.385197 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-58nfc" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="registry-server" containerID="cri-o://6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4" gracePeriod=2 Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.319318 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.392562 4915 generic.go:334] "Generic (PLEG): container finished" podID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerID="6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4" exitCode=0 Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.392608 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerDied","Data":"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4"} Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.392633 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-58nfc" event={"ID":"d06b648e-6377-4f0e-89d6-1be6577c18a2","Type":"ContainerDied","Data":"b407f90b4fdaa189fdd8da54f79ab4169550ef499c3e762264d8a50acdbb99f5"} Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.392641 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-58nfc" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.392667 4915 scope.go:117] "RemoveContainer" containerID="6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.410403 4915 scope.go:117] "RemoveContainer" containerID="9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.427469 4915 scope.go:117] "RemoveContainer" containerID="2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.448457 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh7bf\" (UniqueName: \"kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf\") pod \"d06b648e-6377-4f0e-89d6-1be6577c18a2\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.448549 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities\") pod \"d06b648e-6377-4f0e-89d6-1be6577c18a2\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.448579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content\") pod \"d06b648e-6377-4f0e-89d6-1be6577c18a2\" (UID: \"d06b648e-6377-4f0e-89d6-1be6577c18a2\") " Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.449365 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities" (OuterVolumeSpecName: "utilities") pod "d06b648e-6377-4f0e-89d6-1be6577c18a2" (UID: 
"d06b648e-6377-4f0e-89d6-1be6577c18a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.453947 4915 scope.go:117] "RemoveContainer" containerID="6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4" Jan 27 20:04:52 crc kubenswrapper[4915]: E0127 20:04:52.454632 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4\": container with ID starting with 6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4 not found: ID does not exist" containerID="6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.454662 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4"} err="failed to get container status \"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4\": rpc error: code = NotFound desc = could not find container \"6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4\": container with ID starting with 6a6f8ecbc22217ae0f530ceb18cdede97d70dd8248c4b8589951a36b102e71b4 not found: ID does not exist" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.454682 4915 scope.go:117] "RemoveContainer" containerID="9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685" Jan 27 20:04:52 crc kubenswrapper[4915]: E0127 20:04:52.455173 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685\": container with ID starting with 9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685 not found: ID does not exist" 
containerID="9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.455221 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685"} err="failed to get container status \"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685\": rpc error: code = NotFound desc = could not find container \"9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685\": container with ID starting with 9f724359e6214c813c4b5db6ced3b8939830be39457a2cf115cce22c8e7ab685 not found: ID does not exist" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.455252 4915 scope.go:117] "RemoveContainer" containerID="2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.455367 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf" (OuterVolumeSpecName: "kube-api-access-mh7bf") pod "d06b648e-6377-4f0e-89d6-1be6577c18a2" (UID: "d06b648e-6377-4f0e-89d6-1be6577c18a2"). InnerVolumeSpecName "kube-api-access-mh7bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:04:52 crc kubenswrapper[4915]: E0127 20:04:52.455722 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336\": container with ID starting with 2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336 not found: ID does not exist" containerID="2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.455749 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336"} err="failed to get container status \"2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336\": rpc error: code = NotFound desc = could not find container \"2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336\": container with ID starting with 2a9ac9f2d7e8049fed7c54d19c91bb0e8561a4a3d2bdbc7a48711826b814b336 not found: ID does not exist" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.498496 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d06b648e-6377-4f0e-89d6-1be6577c18a2" (UID: "d06b648e-6377-4f0e-89d6-1be6577c18a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.550042 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh7bf\" (UniqueName: \"kubernetes.io/projected/d06b648e-6377-4f0e-89d6-1be6577c18a2-kube-api-access-mh7bf\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.550408 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.550532 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d06b648e-6377-4f0e-89d6-1be6577c18a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.736458 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:52 crc kubenswrapper[4915]: I0127 20:04:52.742877 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-58nfc"] Jan 27 20:04:53 crc kubenswrapper[4915]: I0127 20:04:53.369011 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" path="/var/lib/kubelet/pods/d06b648e-6377-4f0e-89d6-1be6577c18a2/volumes" Jan 27 20:04:53 crc kubenswrapper[4915]: I0127 20:04:53.751659 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:53 crc kubenswrapper[4915]: I0127 20:04:53.752146 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:53 crc kubenswrapper[4915]: I0127 20:04:53.797086 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:54 crc kubenswrapper[4915]: I0127 20:04:54.464449 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:54 crc kubenswrapper[4915]: I0127 20:04:54.991691 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:56 crc kubenswrapper[4915]: I0127 20:04:56.425108 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cftnr" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="registry-server" containerID="cri-o://3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327" gracePeriod=2 Jan 27 20:04:56 crc kubenswrapper[4915]: I0127 20:04:56.823139 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.018348 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities\") pod \"885cbaf4-4622-4a1a-9045-895d0a82801e\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.018438 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p77nn\" (UniqueName: \"kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn\") pod \"885cbaf4-4622-4a1a-9045-895d0a82801e\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.018579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content\") pod 
\"885cbaf4-4622-4a1a-9045-895d0a82801e\" (UID: \"885cbaf4-4622-4a1a-9045-895d0a82801e\") " Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.019328 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities" (OuterVolumeSpecName: "utilities") pod "885cbaf4-4622-4a1a-9045-895d0a82801e" (UID: "885cbaf4-4622-4a1a-9045-895d0a82801e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.027768 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn" (OuterVolumeSpecName: "kube-api-access-p77nn") pod "885cbaf4-4622-4a1a-9045-895d0a82801e" (UID: "885cbaf4-4622-4a1a-9045-895d0a82801e"). InnerVolumeSpecName "kube-api-access-p77nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.049716 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "885cbaf4-4622-4a1a-9045-895d0a82801e" (UID: "885cbaf4-4622-4a1a-9045-895d0a82801e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.120703 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.120759 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p77nn\" (UniqueName: \"kubernetes.io/projected/885cbaf4-4622-4a1a-9045-895d0a82801e-kube-api-access-p77nn\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.120772 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/885cbaf4-4622-4a1a-9045-895d0a82801e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.436204 4915 generic.go:334] "Generic (PLEG): container finished" podID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerID="3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327" exitCode=0 Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.436279 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerDied","Data":"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327"} Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.436321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cftnr" event={"ID":"885cbaf4-4622-4a1a-9045-895d0a82801e","Type":"ContainerDied","Data":"3bbad01ff8cfccb75e430ddcd98c3d685ffb3a8650e9797d4b757eec622a43de"} Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.436465 4915 scope.go:117] "RemoveContainer" containerID="3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 
20:04:57.436464 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cftnr" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.462745 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.472710 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cftnr"] Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.478285 4915 scope.go:117] "RemoveContainer" containerID="ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.500054 4915 scope.go:117] "RemoveContainer" containerID="b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.529780 4915 scope.go:117] "RemoveContainer" containerID="3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327" Jan 27 20:04:57 crc kubenswrapper[4915]: E0127 20:04:57.530359 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327\": container with ID starting with 3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327 not found: ID does not exist" containerID="3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.530396 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327"} err="failed to get container status \"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327\": rpc error: code = NotFound desc = could not find container \"3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327\": container with ID starting with 
3580264e9665d691b8be893d5fb7e45685942bb59f3129829e6772996468b327 not found: ID does not exist" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.530422 4915 scope.go:117] "RemoveContainer" containerID="ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540" Jan 27 20:04:57 crc kubenswrapper[4915]: E0127 20:04:57.531032 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540\": container with ID starting with ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540 not found: ID does not exist" containerID="ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.531053 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540"} err="failed to get container status \"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540\": rpc error: code = NotFound desc = could not find container \"ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540\": container with ID starting with ad7c3664c1841f98388ba04577cbf629a956f01c57dd6564f647c98bc2965540 not found: ID does not exist" Jan 27 20:04:57 crc kubenswrapper[4915]: I0127 20:04:57.531067 4915 scope.go:117] "RemoveContainer" containerID="b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf" Jan 27 20:04:57 crc kubenswrapper[4915]: E0127 20:04:57.531356 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf\": container with ID starting with b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf not found: ID does not exist" containerID="b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf" Jan 27 20:04:57 crc 
kubenswrapper[4915]: I0127 20:04:57.531376 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf"} err="failed to get container status \"b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf\": rpc error: code = NotFound desc = could not find container \"b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf\": container with ID starting with b5d90c27bcc2f26a09cfb109f8b355bf903c952f214814e5a7dd6366e6dd3edf not found: ID does not exist" Jan 27 20:04:59 crc kubenswrapper[4915]: I0127 20:04:59.368055 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" path="/var/lib/kubelet/pods/885cbaf4-4622-4a1a-9045-895d0a82801e/volumes" Jan 27 20:05:20 crc kubenswrapper[4915]: I0127 20:05:20.624385 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:05:20 crc kubenswrapper[4915]: I0127 20:05:20.625204 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:05:50 crc kubenswrapper[4915]: I0127 20:05:50.625102 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:05:50 crc kubenswrapper[4915]: I0127 20:05:50.625701 4915 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:06:20 crc kubenswrapper[4915]: I0127 20:06:20.624975 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:06:20 crc kubenswrapper[4915]: I0127 20:06:20.625714 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:06:20 crc kubenswrapper[4915]: I0127 20:06:20.625785 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:06:20 crc kubenswrapper[4915]: I0127 20:06:20.626773 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:06:20 crc kubenswrapper[4915]: I0127 20:06:20.626904 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" gracePeriod=600
Jan 27 20:06:20 crc kubenswrapper[4915]: E0127 20:06:20.757071 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:06:21 crc kubenswrapper[4915]: I0127 20:06:21.156896 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" exitCode=0
Jan 27 20:06:21 crc kubenswrapper[4915]: I0127 20:06:21.156983 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"}
Jan 27 20:06:21 crc kubenswrapper[4915]: I0127 20:06:21.157252 4915 scope.go:117] "RemoveContainer" containerID="d580dbaf336b4143ceeb4e1292fe240a1a27371f6459ca5ccb047b9704b2054c"
Jan 27 20:06:21 crc kubenswrapper[4915]: I0127 20:06:21.158039 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:06:21 crc kubenswrapper[4915]: E0127 20:06:21.158385 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:06:33 crc kubenswrapper[4915]: I0127 20:06:33.359472 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:06:33 crc kubenswrapper[4915]: E0127 20:06:33.360901 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:06:45 crc kubenswrapper[4915]: I0127 20:06:45.357884 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:06:45 crc kubenswrapper[4915]: E0127 20:06:45.358868 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:06:58 crc kubenswrapper[4915]: I0127 20:06:58.357306 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:06:58 crc kubenswrapper[4915]: E0127 20:06:58.358267 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:07:13 crc kubenswrapper[4915]: I0127 20:07:13.358026 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:07:13 crc kubenswrapper[4915]: E0127 20:07:13.358785 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:07:27 crc kubenswrapper[4915]: I0127 20:07:27.357681 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:07:27 crc kubenswrapper[4915]: E0127 20:07:27.358509 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:07:41 crc kubenswrapper[4915]: I0127 20:07:41.358171 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:07:41 crc kubenswrapper[4915]: E0127 20:07:41.359201 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.952739 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953372 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="extract-utilities"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953390 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="extract-utilities"
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953409 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="extract-utilities"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953419 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="extract-utilities"
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953436 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="extract-content"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953445 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="extract-content"
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953457 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953465 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953482 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="extract-content"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953491 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="extract-content"
Jan 27 20:07:50 crc kubenswrapper[4915]: E0127 20:07:50.953509 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953518 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953697 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="885cbaf4-4622-4a1a-9045-895d0a82801e" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.953709 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06b648e-6377-4f0e-89d6-1be6577c18a2" containerName="registry-server"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.954374 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.959370 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-grfcr"
Jan 27 20:07:50 crc kubenswrapper[4915]: I0127 20:07:50.968955 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.122747 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.123151 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w595s\" (UniqueName: \"kubernetes.io/projected/05230450-655e-4be4-a0fb-031369fc838d-kube-api-access-w595s\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.224073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w595s\" (UniqueName: \"kubernetes.io/projected/05230450-655e-4be4-a0fb-031369fc838d-kube-api-access-w595s\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.224133 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.226913 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.227063 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a30880b0ae907a9e7c7447d3d188361a84e152ef771cecfaba9dac2d1d3a10e/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.245372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w595s\" (UniqueName: \"kubernetes.io/projected/05230450-655e-4be4-a0fb-031369fc838d-kube-api-access-w595s\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.255180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2bba3d3-8770-4d7c-beac-41284199d8c0\") pod \"mariadb-copy-data\" (UID: \"05230450-655e-4be4-a0fb-031369fc838d\") " pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.286826 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.783888 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 27 20:07:51 crc kubenswrapper[4915]: I0127 20:07:51.855714 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"05230450-655e-4be4-a0fb-031369fc838d","Type":"ContainerStarted","Data":"ae3c4c0932d76e1cb3045426871eb2c04db896ecca9040b0f1d29843410f00c6"}
Jan 27 20:07:52 crc kubenswrapper[4915]: I0127 20:07:52.863755 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"05230450-655e-4be4-a0fb-031369fc838d","Type":"ContainerStarted","Data":"d31818ade1020df8c34507e77341f7bf1c552f5da5e2c40b2c528420e1809b8d"}
Jan 27 20:07:52 crc kubenswrapper[4915]: I0127 20:07:52.880848 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.8808180180000003 podStartE2EDuration="3.880818018s" podCreationTimestamp="2026-01-27 20:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:07:52.877001574 +0000 UTC m=+5164.234855248" watchObservedRunningTime="2026-01-27 20:07:52.880818018 +0000 UTC m=+5164.238671692"
Jan 27 20:07:55 crc kubenswrapper[4915]: I0127 20:07:55.894591 4915 scope.go:117] "RemoveContainer" containerID="43b05342dc1519e61b23f5a664f15c2e58a9173e8666d9e3214fb6bed984e1b0"
Jan 27 20:07:55 crc kubenswrapper[4915]: I0127 20:07:55.945591 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:55 crc kubenswrapper[4915]: I0127 20:07:55.946853 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:55 crc kubenswrapper[4915]: I0127 20:07:55.966696 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.105183 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjpd\" (UniqueName: \"kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd\") pod \"mariadb-client\" (UID: \"45393e80-6e08-4ed6-86fd-ebbac4fae0ae\") " pod="openstack/mariadb-client"
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.207440 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjpd\" (UniqueName: \"kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd\") pod \"mariadb-client\" (UID: \"45393e80-6e08-4ed6-86fd-ebbac4fae0ae\") " pod="openstack/mariadb-client"
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.245417 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjpd\" (UniqueName: \"kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd\") pod \"mariadb-client\" (UID: \"45393e80-6e08-4ed6-86fd-ebbac4fae0ae\") " pod="openstack/mariadb-client"
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.273137 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.358287 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:07:56 crc kubenswrapper[4915]: E0127 20:07:56.358743 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.697616 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:56 crc kubenswrapper[4915]: W0127 20:07:56.698992 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45393e80_6e08_4ed6_86fd_ebbac4fae0ae.slice/crio-1ce402f8fb0b65b4e63eaf5b735895462cb15d0b580c9aa88316400c581a0de9 WatchSource:0}: Error finding container 1ce402f8fb0b65b4e63eaf5b735895462cb15d0b580c9aa88316400c581a0de9: Status 404 returned error can't find the container with id 1ce402f8fb0b65b4e63eaf5b735895462cb15d0b580c9aa88316400c581a0de9
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.904253 4915 generic.go:334] "Generic (PLEG): container finished" podID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" containerID="74c18c6694a0e108607c5b1348d49a223bffa1dfbd42ef36699a336ee3bfd818" exitCode=0
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.904355 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"45393e80-6e08-4ed6-86fd-ebbac4fae0ae","Type":"ContainerDied","Data":"74c18c6694a0e108607c5b1348d49a223bffa1dfbd42ef36699a336ee3bfd818"}
Jan 27 20:07:56 crc kubenswrapper[4915]: I0127 20:07:56.904590 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"45393e80-6e08-4ed6-86fd-ebbac4fae0ae","Type":"ContainerStarted","Data":"1ce402f8fb0b65b4e63eaf5b735895462cb15d0b580c9aa88316400c581a0de9"}
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.184345 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.223163 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_45393e80-6e08-4ed6-86fd-ebbac4fae0ae/mariadb-client/0.log"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.256632 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.266704 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.348578 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjpd\" (UniqueName: \"kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd\") pod \"45393e80-6e08-4ed6-86fd-ebbac4fae0ae\" (UID: \"45393e80-6e08-4ed6-86fd-ebbac4fae0ae\") "
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.359512 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd" (OuterVolumeSpecName: "kube-api-access-mtjpd") pod "45393e80-6e08-4ed6-86fd-ebbac4fae0ae" (UID: "45393e80-6e08-4ed6-86fd-ebbac4fae0ae"). InnerVolumeSpecName "kube-api-access-mtjpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.400427 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:58 crc kubenswrapper[4915]: E0127 20:07:58.401169 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" containerName="mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.401194 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" containerName="mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.401367 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" containerName="mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.401997 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.407383 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.450787 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtjpd\" (UniqueName: \"kubernetes.io/projected/45393e80-6e08-4ed6-86fd-ebbac4fae0ae-kube-api-access-mtjpd\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.552272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85gz\" (UniqueName: \"kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz\") pod \"mariadb-client\" (UID: \"e4159a5c-00d8-45bb-87fc-c9ab37cacc50\") " pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.653240 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85gz\" (UniqueName: \"kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz\") pod \"mariadb-client\" (UID: \"e4159a5c-00d8-45bb-87fc-c9ab37cacc50\") " pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.671331 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85gz\" (UniqueName: \"kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz\") pod \"mariadb-client\" (UID: \"e4159a5c-00d8-45bb-87fc-c9ab37cacc50\") " pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.722093 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.917207 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce402f8fb0b65b4e63eaf5b735895462cb15d0b580c9aa88316400c581a0de9"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.917293 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:07:58 crc kubenswrapper[4915]: I0127 20:07:58.934581 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" podUID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50"
Jan 27 20:07:59 crc kubenswrapper[4915]: I0127 20:07:59.137788 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:07:59 crc kubenswrapper[4915]: W0127 20:07:59.140915 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4159a5c_00d8_45bb_87fc_c9ab37cacc50.slice/crio-bd14895aa66bec89441a2f92d3d46b29aa90819b02da7fe9bf1372482e31efcd WatchSource:0}: Error finding container bd14895aa66bec89441a2f92d3d46b29aa90819b02da7fe9bf1372482e31efcd: Status 404 returned error can't find the container with id bd14895aa66bec89441a2f92d3d46b29aa90819b02da7fe9bf1372482e31efcd
Jan 27 20:07:59 crc kubenswrapper[4915]: I0127 20:07:59.366535 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45393e80-6e08-4ed6-86fd-ebbac4fae0ae" path="/var/lib/kubelet/pods/45393e80-6e08-4ed6-86fd-ebbac4fae0ae/volumes"
Jan 27 20:07:59 crc kubenswrapper[4915]: I0127 20:07:59.929418 4915 generic.go:334] "Generic (PLEG): container finished" podID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50" containerID="ae819cb5b55e900209d1ad4d8c1ba0204421bd57a84902cf136a4f4af3776a59" exitCode=0
Jan 27 20:07:59 crc kubenswrapper[4915]: I0127 20:07:59.929469 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e4159a5c-00d8-45bb-87fc-c9ab37cacc50","Type":"ContainerDied","Data":"ae819cb5b55e900209d1ad4d8c1ba0204421bd57a84902cf136a4f4af3776a59"}
Jan 27 20:07:59 crc kubenswrapper[4915]: I0127 20:07:59.929504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e4159a5c-00d8-45bb-87fc-c9ab37cacc50","Type":"ContainerStarted","Data":"bd14895aa66bec89441a2f92d3d46b29aa90819b02da7fe9bf1372482e31efcd"}
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.281516 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.311006 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_e4159a5c-00d8-45bb-87fc-c9ab37cacc50/mariadb-client/0.log"
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.338446 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.343166 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.401210 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85gz\" (UniqueName: \"kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz\") pod \"e4159a5c-00d8-45bb-87fc-c9ab37cacc50\" (UID: \"e4159a5c-00d8-45bb-87fc-c9ab37cacc50\") "
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.407371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz" (OuterVolumeSpecName: "kube-api-access-z85gz") pod "e4159a5c-00d8-45bb-87fc-c9ab37cacc50" (UID: "e4159a5c-00d8-45bb-87fc-c9ab37cacc50"). InnerVolumeSpecName "kube-api-access-z85gz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.503212 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85gz\" (UniqueName: \"kubernetes.io/projected/e4159a5c-00d8-45bb-87fc-c9ab37cacc50-kube-api-access-z85gz\") on node \"crc\" DevicePath \"\""
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.947402 4915 scope.go:117] "RemoveContainer" containerID="ae819cb5b55e900209d1ad4d8c1ba0204421bd57a84902cf136a4f4af3776a59"
Jan 27 20:08:01 crc kubenswrapper[4915]: I0127 20:08:01.947492 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 20:08:03 crc kubenswrapper[4915]: I0127 20:08:03.369291 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50" path="/var/lib/kubelet/pods/e4159a5c-00d8-45bb-87fc-c9ab37cacc50/volumes"
Jan 27 20:08:11 crc kubenswrapper[4915]: I0127 20:08:11.357863 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:08:11 crc kubenswrapper[4915]: E0127 20:08:11.358690 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:08:23 crc kubenswrapper[4915]: I0127 20:08:23.357954 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:08:23 crc kubenswrapper[4915]: E0127 20:08:23.358884 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:08:35 crc kubenswrapper[4915]: I0127 20:08:35.357772 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:08:35 crc kubenswrapper[4915]: E0127 20:08:35.358556 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.721336 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 20:08:38 crc kubenswrapper[4915]: E0127 20:08:38.752276 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50" containerName="mariadb-client"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.752332 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50" containerName="mariadb-client"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.753152 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4159a5c-00d8-45bb-87fc-c9ab37cacc50" containerName="mariadb-client"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.756487 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.764318 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9dp8v"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.766570 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.767247 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.768003 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.770101 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.793265 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.803978 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.806442 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.813203 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.819729 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.907888 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.909177 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.914051 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.914174 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bmhtm"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.914572 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.929257 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930291 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-config\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930343 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930403 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0e8901-adce-4d27-8f75-95a18d45659a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbjb\" (UniqueName: \"kubernetes.io/projected/2e0e8901-adce-4d27-8f75-95a18d45659a-kube-api-access-8lbjb\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930444 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930469 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29af113a-6c28-4034-8541-3f450288d15f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930486 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb28fc6-fbee-4298-b22c-0581e2f377f0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e8901-adce-4d27-8f75-95a18d45659a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-config\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930599 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930627 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29af113a-6c28-4034-8541-3f450288d15f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930671 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"kube-api-access-wt5p4\" (UniqueName: \"kubernetes.io/projected/29af113a-6c28-4034-8541-3f450288d15f-kube-api-access-wt5p4\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bb28fc6-fbee-4298-b22c-0581e2f377f0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930717 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930740 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mzh\" (UniqueName: \"kubernetes.io/projected/6bb28fc6-fbee-4298-b22c-0581e2f377f0-kube-api-access-x4mzh\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.930768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-config\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.936614 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.940854 4915 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.942754 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.944306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.968709 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 20:08:38 crc kubenswrapper[4915]: I0127 20:08:38.983452 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031642 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-config\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814e662a-f9d9-422d-b2e3-7dd5be4313c3-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031736 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-config\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031760 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsmp\" (UniqueName: \"kubernetes.io/projected/814e662a-f9d9-422d-b2e3-7dd5be4313c3-kube-api-access-4lsmp\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031781 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031818 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0e8901-adce-4d27-8f75-95a18d45659a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031888 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lbjb\" (UniqueName: \"kubernetes.io/projected/2e0e8901-adce-4d27-8f75-95a18d45659a-kube-api-access-8lbjb\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-scripts\") pod \"ovsdbserver-nb-1\" (UID: 
\"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.031995 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29af113a-6c28-4034-8541-3f450288d15f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032026 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/814e662a-f9d9-422d-b2e3-7dd5be4313c3-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032055 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb28fc6-fbee-4298-b22c-0581e2f377f0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032340 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/2c487705-cbc3-408b-82be-8634bfdb534d-kube-api-access-7lfff\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032404 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 
20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032409 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0e8901-adce-4d27-8f75-95a18d45659a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032425 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e8901-adce-4d27-8f75-95a18d45659a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032465 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-config\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-config\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032550 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c487705-cbc3-408b-82be-8634bfdb534d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032598 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-config\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032621 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c487705-cbc3-408b-82be-8634bfdb534d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032651 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032714 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032740 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91cd6e4c-0379-4e16-8074-d2b2b347e64e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032769 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032827 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29af113a-6c28-4034-8541-3f450288d15f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032869 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91cd6e4c-0379-4e16-8074-d2b2b347e64e-ovsdb-rundir\") pod 
\"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29af113a-6c28-4034-8541-3f450288d15f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032929 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-config\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032954 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5p4\" (UniqueName: \"kubernetes.io/projected/29af113a-6c28-4034-8541-3f450288d15f-kube-api-access-wt5p4\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.032977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws5k9\" (UniqueName: \"kubernetes.io/projected/91cd6e4c-0379-4e16-8074-d2b2b347e64e-kube-api-access-ws5k9\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033186 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6bb28fc6-fbee-4298-b22c-0581e2f377f0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc 
kubenswrapper[4915]: I0127 20:08:39.033209 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mzh\" (UniqueName: \"kubernetes.io/projected/6bb28fc6-fbee-4298-b22c-0581e2f377f0-kube-api-access-x4mzh\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033253 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-config\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033413 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033913 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/6bb28fc6-fbee-4298-b22c-0581e2f377f0-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.033968 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-config\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.034400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6bb28fc6-fbee-4298-b22c-0581e2f377f0-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.034509 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29af113a-6c28-4034-8541-3f450288d15f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035420 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035454 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035459 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/298141326886ede1ce93bdabfac987b287eb6ec7c0f87ed1f0a239878f1a7bcd/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035482 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c83b16eb87063d2e0233ce62a2ed5eaef5def269ee33317a6362c317938202ed/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035546 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035584 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/36446e565e7218ef407ead000a4e30b6887ec2d2c00e07bc3cced6deefc8ab37/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.035974 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0e8901-adce-4d27-8f75-95a18d45659a-config\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.038457 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29af113a-6c28-4034-8541-3f450288d15f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.039336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0e8901-adce-4d27-8f75-95a18d45659a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.039970 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bb28fc6-fbee-4298-b22c-0581e2f377f0-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.049205 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5p4\" (UniqueName: \"kubernetes.io/projected/29af113a-6c28-4034-8541-3f450288d15f-kube-api-access-wt5p4\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.051991 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mzh\" (UniqueName: \"kubernetes.io/projected/6bb28fc6-fbee-4298-b22c-0581e2f377f0-kube-api-access-x4mzh\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.052438 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lbjb\" (UniqueName: \"kubernetes.io/projected/2e0e8901-adce-4d27-8f75-95a18d45659a-kube-api-access-8lbjb\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.067295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9574c10f-efd5-4584-842b-d2aa7e5aa980\") pod \"ovsdbserver-nb-1\" (UID: \"2e0e8901-adce-4d27-8f75-95a18d45659a\") " pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.068041 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c01beb9-7732-48f6-b79e-44bab88f3bb8\") pod \"ovsdbserver-nb-2\" (UID: \"6bb28fc6-fbee-4298-b22c-0581e2f377f0\") " pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.076749 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c675e80-00de-4201-ac00-5db413ffc5ee\") pod \"ovsdbserver-nb-0\" (UID: \"29af113a-6c28-4034-8541-3f450288d15f\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.099133 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.117562 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.132391 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.134385 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.134452 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsmp\" (UniqueName: \"kubernetes.io/projected/814e662a-f9d9-422d-b2e3-7dd5be4313c3-kube-api-access-4lsmp\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.134753 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/814e662a-f9d9-422d-b2e3-7dd5be4313c3-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135234 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/2c487705-cbc3-408b-82be-8634bfdb534d-kube-api-access-7lfff\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-config\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c487705-cbc3-408b-82be-8634bfdb534d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135456 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-config\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135475 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/814e662a-f9d9-422d-b2e3-7dd5be4313c3-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135486 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c487705-cbc3-408b-82be-8634bfdb534d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135614 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135679 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135726 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91cd6e4c-0379-4e16-8074-d2b2b347e64e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135782 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135886 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91cd6e4c-0379-4e16-8074-d2b2b347e64e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135949 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-config\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135979 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.135986 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws5k9\" (UniqueName: \"kubernetes.io/projected/91cd6e4c-0379-4e16-8074-d2b2b347e64e-kube-api-access-ws5k9\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.136336 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.136442 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814e662a-f9d9-422d-b2e3-7dd5be4313c3-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.140128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814e662a-f9d9-422d-b2e3-7dd5be4313c3-config\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.140158 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2c487705-cbc3-408b-82be-8634bfdb534d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141087 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-config\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141443 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141471 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c61ce9c0420eac0c1936d41e7fb183197067337b921c345175c9212a34acb76/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141503 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91cd6e4c-0379-4e16-8074-d2b2b347e64e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141744 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.141777 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f886ff69c1577577e9b56a3ab593b01cf3bf697603b0ea174fd52bd2986c39d4/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.142160 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91cd6e4c-0379-4e16-8074-d2b2b347e64e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.142410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-config\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.142458 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.142537 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a090fa9e17e73559180fe1f425648d9f9075c6f7b2d992b0b14a36b97dfc68c2/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.142825 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c487705-cbc3-408b-82be-8634bfdb534d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.143743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c487705-cbc3-408b-82be-8634bfdb534d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.154071 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814e662a-f9d9-422d-b2e3-7dd5be4313c3-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.155017 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91cd6e4c-0379-4e16-8074-d2b2b347e64e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " 
pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.161614 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfff\" (UniqueName: \"kubernetes.io/projected/2c487705-cbc3-408b-82be-8634bfdb534d-kube-api-access-7lfff\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.161843 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws5k9\" (UniqueName: \"kubernetes.io/projected/91cd6e4c-0379-4e16-8074-d2b2b347e64e-kube-api-access-ws5k9\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.162537 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsmp\" (UniqueName: \"kubernetes.io/projected/814e662a-f9d9-422d-b2e3-7dd5be4313c3-kube-api-access-4lsmp\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.187301 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a1505b3-577b-4108-a7c3-5fa257180f7f\") pod \"ovsdbserver-sb-0\" (UID: \"2c487705-cbc3-408b-82be-8634bfdb534d\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.192762 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-70aa9a80-860b-4d39-84c7-e29cc60e7e90\") pod \"ovsdbserver-sb-2\" (UID: \"814e662a-f9d9-422d-b2e3-7dd5be4313c3\") " pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 
20:08:39.210512 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8aa11b0d-5654-4923-8d9a-e7fab4e7f083\") pod \"ovsdbserver-sb-1\" (UID: \"91cd6e4c-0379-4e16-8074-d2b2b347e64e\") " pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.228316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.258485 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.269591 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.668649 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.768424 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 20:08:39 crc kubenswrapper[4915]: I0127 20:08:39.861081 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 20:08:39 crc kubenswrapper[4915]: W0127 20:08:39.870387 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod814e662a_f9d9_422d_b2e3_7dd5be4313c3.slice/crio-c35043698e6f871d31dfa09be48416fb106ceab3db04295ead3ec8fc91dbea44 WatchSource:0}: Error finding container c35043698e6f871d31dfa09be48416fb106ceab3db04295ead3ec8fc91dbea44: Status 404 returned error can't find the container with id c35043698e6f871d31dfa09be48416fb106ceab3db04295ead3ec8fc91dbea44 Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.256324 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"29af113a-6c28-4034-8541-3f450288d15f","Type":"ContainerStarted","Data":"d818a93cad2104eb99625e535666f5d198346d95cc9d0263bec3ba67f2ec7559"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.256370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29af113a-6c28-4034-8541-3f450288d15f","Type":"ContainerStarted","Data":"af4d5549f0a1eaec92b8ccbeec7071f719bc3ce70894d7a168264622b91cb6d5"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.256385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29af113a-6c28-4034-8541-3f450288d15f","Type":"ContainerStarted","Data":"ecbb025901416a1b789cd92407421e0d6df4c2347e538a225540e811e525bb5d"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.258676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"814e662a-f9d9-422d-b2e3-7dd5be4313c3","Type":"ContainerStarted","Data":"0a4d1826fe00c0550835afe34e1967a4900540ad4a7c4484e467c7cfdad09d18"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.258718 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"814e662a-f9d9-422d-b2e3-7dd5be4313c3","Type":"ContainerStarted","Data":"dd30710fcdb7da29d3ead410f8223a82fde9f7e79db31a7c51488061cf644109"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.258731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"814e662a-f9d9-422d-b2e3-7dd5be4313c3","Type":"ContainerStarted","Data":"c35043698e6f871d31dfa09be48416fb106ceab3db04295ead3ec8fc91dbea44"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.260930 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" 
event={"ID":"2e0e8901-adce-4d27-8f75-95a18d45659a","Type":"ContainerStarted","Data":"b99dc930b4e525ebaf881f843c48ebd09c1ee28fc2a74ee11c7f629b1907f318"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.260974 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"2e0e8901-adce-4d27-8f75-95a18d45659a","Type":"ContainerStarted","Data":"ab9e1bdba25406b054e5ca1bfe46579d6fb28fedfbbe1204dfa179ae497c86ef"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.260987 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"2e0e8901-adce-4d27-8f75-95a18d45659a","Type":"ContainerStarted","Data":"bc80b8ce2a7496cbb3b2a08aa5feff339d432b90181b1db350e05023d90c39f9"} Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.276171 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.276150067 podStartE2EDuration="3.276150067s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:40.271348968 +0000 UTC m=+5211.629202652" watchObservedRunningTime="2026-01-27 20:08:40.276150067 +0000 UTC m=+5211.634003731" Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.293653 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.293636737 podStartE2EDuration="3.293636737s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:40.28849285 +0000 UTC m=+5211.646346514" watchObservedRunningTime="2026-01-27 20:08:40.293636737 +0000 UTC m=+5211.651490401" Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.311761 4915 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.311741532 podStartE2EDuration="3.311741532s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:40.304987306 +0000 UTC m=+5211.662840980" watchObservedRunningTime="2026-01-27 20:08:40.311741532 +0000 UTC m=+5211.669595196" Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.538435 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 20:08:40 crc kubenswrapper[4915]: W0127 20:08:40.538724 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91cd6e4c_0379_4e16_8074_d2b2b347e64e.slice/crio-c3b96e4473ee40b555293acdacf3ee5e2b855208947efd49eb35998a6603a05a WatchSource:0}: Error finding container c3b96e4473ee40b555293acdacf3ee5e2b855208947efd49eb35998a6603a05a: Status 404 returned error can't find the container with id c3b96e4473ee40b555293acdacf3ee5e2b855208947efd49eb35998a6603a05a Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.679767 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 20:08:40 crc kubenswrapper[4915]: W0127 20:08:40.688191 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bb28fc6_fbee_4298_b22c_0581e2f377f0.slice/crio-f86c900c466b324ec79c2238c659b99c602899f06827a60736a1b47355f843af WatchSource:0}: Error finding container f86c900c466b324ec79c2238c659b99c602899f06827a60736a1b47355f843af: Status 404 returned error can't find the container with id f86c900c466b324ec79c2238c659b99c602899f06827a60736a1b47355f843af Jan 27 20:08:40 crc kubenswrapper[4915]: I0127 20:08:40.832106 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 20:08:41 crc 
kubenswrapper[4915]: I0127 20:08:41.269815 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"91cd6e4c-0379-4e16-8074-d2b2b347e64e","Type":"ContainerStarted","Data":"e0805dca0a82099635eed2ae85f391395b5cf80975d46a1757bab8e32bdf75e2"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.269867 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"91cd6e4c-0379-4e16-8074-d2b2b347e64e","Type":"ContainerStarted","Data":"20126dfc92437c471ccb4298e9e1e691ce0c1e756824b13ea89a5d7331467a14"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.269879 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"91cd6e4c-0379-4e16-8074-d2b2b347e64e","Type":"ContainerStarted","Data":"c3b96e4473ee40b555293acdacf3ee5e2b855208947efd49eb35998a6603a05a"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.272172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2c487705-cbc3-408b-82be-8634bfdb534d","Type":"ContainerStarted","Data":"118f788f4c69efe8cf61096059a759ffa1ca92e167d64646857403d3aa48d911"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.272218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2c487705-cbc3-408b-82be-8634bfdb534d","Type":"ContainerStarted","Data":"4ed6928d374fbc3bfcc658f44ee441aef5c74f0998fe1834a8e1af710eec2381"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.272235 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2c487705-cbc3-408b-82be-8634bfdb534d","Type":"ContainerStarted","Data":"d2f307ff4ff5aa1156c98d3065946caa0d2cbde97dd96e33df08346d8c72ea27"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.274105 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"6bb28fc6-fbee-4298-b22c-0581e2f377f0","Type":"ContainerStarted","Data":"93696473579bfc30c1dd6ec516336f4cc8eea26adb0a8b2fa2d6b4896c2fde28"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.274147 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"6bb28fc6-fbee-4298-b22c-0581e2f377f0","Type":"ContainerStarted","Data":"2b11afc50e5bcc24de0ba5e342b650fdea87d04ab188e018a083273532ed513d"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.274165 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"6bb28fc6-fbee-4298-b22c-0581e2f377f0","Type":"ContainerStarted","Data":"f86c900c466b324ec79c2238c659b99c602899f06827a60736a1b47355f843af"} Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.295368 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.295347932 podStartE2EDuration="4.295347932s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:41.286273649 +0000 UTC m=+5212.644127323" watchObservedRunningTime="2026-01-27 20:08:41.295347932 +0000 UTC m=+5212.653201606" Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.309519 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.30949035 podStartE2EDuration="4.30949035s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:41.301091723 +0000 UTC m=+5212.658945397" watchObservedRunningTime="2026-01-27 20:08:41.30949035 +0000 UTC m=+5212.667344024" Jan 27 20:08:41 crc kubenswrapper[4915]: I0127 20:08:41.320006 4915 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.319979988 podStartE2EDuration="4.319979988s" podCreationTimestamp="2026-01-27 20:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:41.317493366 +0000 UTC m=+5212.675347060" watchObservedRunningTime="2026-01-27 20:08:41.319979988 +0000 UTC m=+5212.677833692" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.099843 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.118612 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.134067 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.135747 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.164036 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.228494 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.260001 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.269853 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:42 crc kubenswrapper[4915]: I0127 20:08:42.283942 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:42 crc 
kubenswrapper[4915]: I0127 20:08:42.283979 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.133950 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.147055 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.157069 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.232078 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.259659 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.270367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.368362 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"] Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.370674 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.376054 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"] Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.377839 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.430625 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g8f\" (UniqueName: \"kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.430935 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.430998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.431104 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " 
pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.531947 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532054 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g8f\" (UniqueName: \"kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532088 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532107 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532912 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532947 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.532996 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.554352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g8f\" (UniqueName: \"kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f\") pod \"dnsmasq-dns-77cb7b48b5-2r8wl\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") " pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:44 crc kubenswrapper[4915]: I0127 20:08:44.690428 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.176876 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:45 crc kubenswrapper[4915]: W0127 20:08:45.214464 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod244e3af8_dbee_4500_9f8a_7220c4b058e9.slice/crio-2f4810e62a24a43ac46a4bdd4cc1b12e4cb1e78ca6fad0e2248c224d4f91a63f WatchSource:0}: Error finding container 2f4810e62a24a43ac46a4bdd4cc1b12e4cb1e78ca6fad0e2248c224d4f91a63f: Status 404 returned error can't find the container with id 2f4810e62a24a43ac46a4bdd4cc1b12e4cb1e78ca6fad0e2248c224d4f91a63f
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.219298 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"]
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.273559 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.314538 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.321693 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.339615 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" event={"ID":"244e3af8-dbee-4500-9f8a-7220c4b058e9","Type":"ContainerStarted","Data":"2f4810e62a24a43ac46a4bdd4cc1b12e4cb1e78ca6fad0e2248c224d4f91a63f"}
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.385156 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.385523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.388472 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.396772 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.563088 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"]
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.585372 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.586559 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.589062 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.599090 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.753144 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.753394 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.753488 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.753654 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.753707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927tr\" (UniqueName: \"kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.855784 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.855964 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.856015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.856082 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.856120 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-927tr\" (UniqueName: \"kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.856941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.856978 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.857161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.857229 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.881600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-927tr\" (UniqueName: \"kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr\") pod \"dnsmasq-dns-6646cf9bdf-52hbf\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") " pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:45 crc kubenswrapper[4915]: I0127 20:08:45.904572 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:46 crc kubenswrapper[4915]: I0127 20:08:46.348190 4915 generic.go:334] "Generic (PLEG): container finished" podID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerID="1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d" exitCode=0
Jan 27 20:08:46 crc kubenswrapper[4915]: I0127 20:08:46.348320 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" event={"ID":"244e3af8-dbee-4500-9f8a-7220c4b058e9","Type":"ContainerDied","Data":"1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d"}
Jan 27 20:08:46 crc kubenswrapper[4915]: I0127 20:08:46.902082 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:08:46 crc kubenswrapper[4915]: W0127 20:08:46.913373 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3bda1c2_6b01_40ff_81f2_7056d572ac93.slice/crio-7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a WatchSource:0}: Error finding container 7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a: Status 404 returned error can't find the container with id 7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.363064 4915 generic.go:334] "Generic (PLEG): container finished" podID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerID="431edf60c8d72469395f2b72c8f53e35d7c190ac49177a2b5301e3bc1284eb2f" exitCode=0
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.368173 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="dnsmasq-dns" containerID="cri-o://e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa" gracePeriod=10
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.374877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" event={"ID":"f3bda1c2-6b01-40ff-81f2-7056d572ac93","Type":"ContainerDied","Data":"431edf60c8d72469395f2b72c8f53e35d7c190ac49177a2b5301e3bc1284eb2f"}
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.374953 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.374998 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" event={"ID":"f3bda1c2-6b01-40ff-81f2-7056d572ac93","Type":"ContainerStarted","Data":"7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a"}
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.375012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" event={"ID":"244e3af8-dbee-4500-9f8a-7220c4b058e9","Type":"ContainerStarted","Data":"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"}
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.426646 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" podStartSLOduration=3.426624307 podStartE2EDuration="3.426624307s" podCreationTimestamp="2026-01-27 20:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:47.421070701 +0000 UTC m=+5218.778924385" watchObservedRunningTime="2026-01-27 20:08:47.426624307 +0000 UTC m=+5218.784477971"
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.824448 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.890720 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config\") pod \"244e3af8-dbee-4500-9f8a-7220c4b058e9\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") "
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.890846 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc\") pod \"244e3af8-dbee-4500-9f8a-7220c4b058e9\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") "
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.890976 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8g8f\" (UniqueName: \"kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f\") pod \"244e3af8-dbee-4500-9f8a-7220c4b058e9\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") "
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.891099 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb\") pod \"244e3af8-dbee-4500-9f8a-7220c4b058e9\" (UID: \"244e3af8-dbee-4500-9f8a-7220c4b058e9\") "
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.895950 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f" (OuterVolumeSpecName: "kube-api-access-k8g8f") pod "244e3af8-dbee-4500-9f8a-7220c4b058e9" (UID: "244e3af8-dbee-4500-9f8a-7220c4b058e9"). InnerVolumeSpecName "kube-api-access-k8g8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.935776 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "244e3af8-dbee-4500-9f8a-7220c4b058e9" (UID: "244e3af8-dbee-4500-9f8a-7220c4b058e9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.937205 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "244e3af8-dbee-4500-9f8a-7220c4b058e9" (UID: "244e3af8-dbee-4500-9f8a-7220c4b058e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.946498 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config" (OuterVolumeSpecName: "config") pod "244e3af8-dbee-4500-9f8a-7220c4b058e9" (UID: "244e3af8-dbee-4500-9f8a-7220c4b058e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.995097 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.995142 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.995157 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/244e3af8-dbee-4500-9f8a-7220c4b058e9-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:08:47 crc kubenswrapper[4915]: I0127 20:08:47.995172 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8g8f\" (UniqueName: \"kubernetes.io/projected/244e3af8-dbee-4500-9f8a-7220c4b058e9-kube-api-access-k8g8f\") on node \"crc\" DevicePath \"\""
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.168296 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"]
Jan 27 20:08:48 crc kubenswrapper[4915]: E0127 20:08:48.168943 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="dnsmasq-dns"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.168972 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="dnsmasq-dns"
Jan 27 20:08:48 crc kubenswrapper[4915]: E0127 20:08:48.168997 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="init"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.169005 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="init"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.169202 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerName="dnsmasq-dns"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.170054 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.173699 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.179128 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.303850 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.304020 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fd3e52e3-4b88-4172-998c-b2e4048f94c3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.304087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvln\" (UniqueName: \"kubernetes.io/projected/fd3e52e3-4b88-4172-998c-b2e4048f94c3-kube-api-access-bsvln\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.379641 4915 generic.go:334] "Generic (PLEG): container finished" podID="244e3af8-dbee-4500-9f8a-7220c4b058e9" containerID="e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa" exitCode=0
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.379726 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.379756 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" event={"ID":"244e3af8-dbee-4500-9f8a-7220c4b058e9","Type":"ContainerDied","Data":"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"}
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.379828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cb7b48b5-2r8wl" event={"ID":"244e3af8-dbee-4500-9f8a-7220c4b058e9","Type":"ContainerDied","Data":"2f4810e62a24a43ac46a4bdd4cc1b12e4cb1e78ca6fad0e2248c224d4f91a63f"}
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.379850 4915 scope.go:117] "RemoveContainer" containerID="e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.383462 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" event={"ID":"f3bda1c2-6b01-40ff-81f2-7056d572ac93","Type":"ContainerStarted","Data":"849c54f277444de56721b46a6e90f1d198c2a4091d2b315b3d8ff9750ad58c5f"}
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.406897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.407167 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fd3e52e3-4b88-4172-998c-b2e4048f94c3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.407295 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsvln\" (UniqueName: \"kubernetes.io/projected/fd3e52e3-4b88-4172-998c-b2e4048f94c3-kube-api-access-bsvln\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.414454 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.414539 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0526966253a9dccf17fe3b69315082220e4f19ab7237c5f7aefca4a418149c1a/globalmount\"" pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.415572 4915 scope.go:117] "RemoveContainer" containerID="1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.419260 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" podStartSLOduration=3.419233749 podStartE2EDuration="3.419233749s" podCreationTimestamp="2026-01-27 20:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:48.4140017 +0000 UTC m=+5219.771855354" watchObservedRunningTime="2026-01-27 20:08:48.419233749 +0000 UTC m=+5219.777087423"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.419930 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fd3e52e3-4b88-4172-998c-b2e4048f94c3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.437230 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsvln\" (UniqueName: \"kubernetes.io/projected/fd3e52e3-4b88-4172-998c-b2e4048f94c3-kube-api-access-bsvln\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.437730 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"]
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.446083 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77cb7b48b5-2r8wl"]
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.457125 4915 scope.go:117] "RemoveContainer" containerID="e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"
Jan 27 20:08:48 crc kubenswrapper[4915]: E0127 20:08:48.457953 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa\": container with ID starting with e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa not found: ID does not exist" containerID="e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.458026 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa"} err="failed to get container status \"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa\": rpc error: code = NotFound desc = could not find container \"e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa\": container with ID starting with e5590fbe7b1c412befe0fb7dd8fcb66fcb7ad98e21555f1e91bf85165dc06afa not found: ID does not exist"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.458078 4915 scope.go:117] "RemoveContainer" containerID="1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d"
Jan 27 20:08:48 crc kubenswrapper[4915]: E0127 20:08:48.458908 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d\": container with ID starting with 1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d not found: ID does not exist" containerID="1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.458958 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d"} err="failed to get container status \"1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d\": rpc error: code = NotFound desc = could not find container \"1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d\": container with ID starting with 1e5d5ea6d352667c8cef043e2bafcba995cabc5936cf8c81de3e66b7fd0f8d3d not found: ID does not exist"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.463396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55ecfaa5-4641-456c-9b33-a9cf8c6507f7\") pod \"ovn-copy-data\" (UID: \"fd3e52e3-4b88-4172-998c-b2e4048f94c3\") " pod="openstack/ovn-copy-data"
Jan 27 20:08:48 crc kubenswrapper[4915]: I0127 20:08:48.515354 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.083647 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.369716 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="244e3af8-dbee-4500-9f8a-7220c4b058e9" path="/var/lib/kubelet/pods/244e3af8-dbee-4500-9f8a-7220c4b058e9/volumes"
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.402522 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fd3e52e3-4b88-4172-998c-b2e4048f94c3","Type":"ContainerStarted","Data":"eed09e1c1d18df349ea825c67dba2ac44121f21c68f300511247068e6ba8d857"}
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.402565 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fd3e52e3-4b88-4172-998c-b2e4048f94c3","Type":"ContainerStarted","Data":"4693487ec7d24315b3971b86db795024d1bfc46a768ef8c73f75a4d3868d105e"}
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.403308 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:08:49 crc kubenswrapper[4915]: I0127 20:08:49.424539 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.424507741 podStartE2EDuration="2.424507741s" podCreationTimestamp="2026-01-27 20:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:49.419673033 +0000 UTC m=+5220.777526697" watchObservedRunningTime="2026-01-27 20:08:49.424507741 +0000 UTC m=+5220.782361405"
Jan 27 20:08:50 crc kubenswrapper[4915]: I0127 20:08:50.358411 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:08:50 crc kubenswrapper[4915]: E0127 20:08:50.359384 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.642400 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.644591 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.648546 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kgmlm"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.648641 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.648826 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.682493 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.725987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab068f49-8db7-44b8-8210-d43885e4958c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.726079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab068f49-8db7-44b8-8210-d43885e4958c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.726238 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-config\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.726265 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdw7d\" (UniqueName: \"kubernetes.io/projected/ab068f49-8db7-44b8-8210-d43885e4958c-kube-api-access-vdw7d\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.726318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-scripts\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.827952 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab068f49-8db7-44b8-8210-d43885e4958c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.828032 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab068f49-8db7-44b8-8210-d43885e4958c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.828119 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-config\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.828149 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdw7d\" (UniqueName: \"kubernetes.io/projected/ab068f49-8db7-44b8-8210-d43885e4958c-kube-api-access-vdw7d\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.828195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-scripts\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.828515 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab068f49-8db7-44b8-8210-d43885e4958c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.829038 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-config\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0"
Jan 27 20:08:54 crc kubenswrapper[4915]: I0127
20:08:54.829222 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab068f49-8db7-44b8-8210-d43885e4958c-scripts\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0" Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.835258 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab068f49-8db7-44b8-8210-d43885e4958c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0" Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.850260 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdw7d\" (UniqueName: \"kubernetes.io/projected/ab068f49-8db7-44b8-8210-d43885e4958c-kube-api-access-vdw7d\") pod \"ovn-northd-0\" (UID: \"ab068f49-8db7-44b8-8210-d43885e4958c\") " pod="openstack/ovn-northd-0" Jan 27 20:08:54 crc kubenswrapper[4915]: I0127 20:08:54.976898 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 20:08:55 crc kubenswrapper[4915]: I0127 20:08:55.399590 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 20:08:55 crc kubenswrapper[4915]: I0127 20:08:55.465305 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab068f49-8db7-44b8-8210-d43885e4958c","Type":"ContainerStarted","Data":"96dc17baf70026b92568021569721a4f20e9e4454916b83880294c5ead52c107"} Jan 27 20:08:55 crc kubenswrapper[4915]: I0127 20:08:55.906965 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" Jan 27 20:08:55 crc kubenswrapper[4915]: I0127 20:08:55.994214 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"] Jan 27 20:08:55 crc kubenswrapper[4915]: I0127 20:08:55.994631 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="dnsmasq-dns" containerID="cri-o://502c4aff969ee29cb603ffa4e1aa3108a4bdca6260e4c2ba8b9f272264df359b" gracePeriod=10 Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.477295 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerID="502c4aff969ee29cb603ffa4e1aa3108a4bdca6260e4c2ba8b9f272264df359b" exitCode=0 Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.477399 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" event={"ID":"8fb21b39-c709-40ca-bdc1-8be83768bb5b","Type":"ContainerDied","Data":"502c4aff969ee29cb603ffa4e1aa3108a4bdca6260e4c2ba8b9f272264df359b"} Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.477700 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" 
event={"ID":"8fb21b39-c709-40ca-bdc1-8be83768bb5b","Type":"ContainerDied","Data":"74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc"} Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.477721 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74fd7c79e2c9d9997a0cd77b6ac1cdb73410f8d6b286642dcc191a7946e4cedc" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.479578 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab068f49-8db7-44b8-8210-d43885e4958c","Type":"ContainerStarted","Data":"a2574f74e5a5e7e90f09be6f9fa9b42440de61e1686fca01bcb0d28647bcd66b"} Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.479622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ab068f49-8db7-44b8-8210-d43885e4958c","Type":"ContainerStarted","Data":"6858c33bff8b48af5bfdec897c8484397c3eda6b83c8eb25acff31b35fe2bc81"} Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.479752 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.496518 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.501708 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.501688399 podStartE2EDuration="2.501688399s" podCreationTimestamp="2026-01-27 20:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:08:56.495721773 +0000 UTC m=+5227.853575447" watchObservedRunningTime="2026-01-27 20:08:56.501688399 +0000 UTC m=+5227.859542063" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.556088 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config\") pod \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.556169 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7fcf\" (UniqueName: \"kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf\") pod \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.556195 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc\") pod \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\" (UID: \"8fb21b39-c709-40ca-bdc1-8be83768bb5b\") " Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.567074 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf" (OuterVolumeSpecName: "kube-api-access-j7fcf") pod "8fb21b39-c709-40ca-bdc1-8be83768bb5b" (UID: 
"8fb21b39-c709-40ca-bdc1-8be83768bb5b"). InnerVolumeSpecName "kube-api-access-j7fcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.604387 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config" (OuterVolumeSpecName: "config") pod "8fb21b39-c709-40ca-bdc1-8be83768bb5b" (UID: "8fb21b39-c709-40ca-bdc1-8be83768bb5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.607624 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fb21b39-c709-40ca-bdc1-8be83768bb5b" (UID: "8fb21b39-c709-40ca-bdc1-8be83768bb5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.658981 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.659017 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb21b39-c709-40ca-bdc1-8be83768bb5b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:08:56 crc kubenswrapper[4915]: I0127 20:08:56.659030 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7fcf\" (UniqueName: \"kubernetes.io/projected/8fb21b39-c709-40ca-bdc1-8be83768bb5b-kube-api-access-j7fcf\") on node \"crc\" DevicePath \"\"" Jan 27 20:08:57 crc kubenswrapper[4915]: I0127 20:08:57.488985 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-vpsjx" Jan 27 20:08:57 crc kubenswrapper[4915]: I0127 20:08:57.518393 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"] Jan 27 20:08:57 crc kubenswrapper[4915]: I0127 20:08:57.525463 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-vpsjx"] Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.368075 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" path="/var/lib/kubelet/pods/8fb21b39-c709-40ca-bdc1-8be83768bb5b/volumes" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.829245 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wn645"] Jan 27 20:08:59 crc kubenswrapper[4915]: E0127 20:08:59.829595 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="init" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.829611 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="init" Jan 27 20:08:59 crc kubenswrapper[4915]: E0127 20:08:59.829623 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="dnsmasq-dns" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.829631 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="dnsmasq-dns" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.829782 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb21b39-c709-40ca-bdc1-8be83768bb5b" containerName="dnsmasq-dns" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.830527 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wn645" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.848961 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wn645"] Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.916630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85fqb\" (UniqueName: \"kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.916730 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.924357 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5200-account-create-update-xkkjg"] Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.925518 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.928768 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 20:08:59 crc kubenswrapper[4915]: I0127 20:08:59.933147 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5200-account-create-update-xkkjg"] Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.018078 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85fqb\" (UniqueName: \"kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.018143 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts\") pod \"keystone-5200-account-create-update-xkkjg\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.018209 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.018327 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87v8c\" (UniqueName: \"kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c\") pod \"keystone-5200-account-create-update-xkkjg\" (UID: 
\"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.019073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.036981 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85fqb\" (UniqueName: \"kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb\") pod \"keystone-db-create-wn645\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " pod="openstack/keystone-db-create-wn645" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.119386 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87v8c\" (UniqueName: \"kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c\") pod \"keystone-5200-account-create-update-xkkjg\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.119815 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts\") pod \"keystone-5200-account-create-update-xkkjg\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.120562 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts\") pod 
\"keystone-5200-account-create-update-xkkjg\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.138963 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87v8c\" (UniqueName: \"kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c\") pod \"keystone-5200-account-create-update-xkkjg\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.148874 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wn645" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.242306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.578714 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wn645"] Jan 27 20:09:00 crc kubenswrapper[4915]: W0127 20:09:00.582218 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7feb35cc_b62a_40af_9109_63180f240627.slice/crio-b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae WatchSource:0}: Error finding container b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae: Status 404 returned error can't find the container with id b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae Jan 27 20:09:00 crc kubenswrapper[4915]: I0127 20:09:00.720478 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5200-account-create-update-xkkjg"] Jan 27 20:09:00 crc kubenswrapper[4915]: W0127 20:09:00.724886 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0774762f_3eb3_4068_ba74_aeed37d4474e.slice/crio-881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8 WatchSource:0}: Error finding container 881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8: Status 404 returned error can't find the container with id 881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8 Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 20:09:01.540961 4915 generic.go:334] "Generic (PLEG): container finished" podID="7feb35cc-b62a-40af-9109-63180f240627" containerID="731c17d61e9351362f4c4180e150274fdaced8e7abbce4e463d0bf5107a3376f" exitCode=0 Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 20:09:01.541123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wn645" event={"ID":"7feb35cc-b62a-40af-9109-63180f240627","Type":"ContainerDied","Data":"731c17d61e9351362f4c4180e150274fdaced8e7abbce4e463d0bf5107a3376f"} Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 20:09:01.541231 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wn645" event={"ID":"7feb35cc-b62a-40af-9109-63180f240627","Type":"ContainerStarted","Data":"b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae"} Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 20:09:01.543952 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5200-account-create-update-xkkjg" event={"ID":"0774762f-3eb3-4068-ba74-aeed37d4474e","Type":"ContainerStarted","Data":"1f71c75cb432c24c2fa48dc464d58a7b71ae37a1f95615b832795542499c11e1"} Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 20:09:01.544030 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5200-account-create-update-xkkjg" event={"ID":"0774762f-3eb3-4068-ba74-aeed37d4474e","Type":"ContainerStarted","Data":"881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8"} Jan 27 20:09:01 crc kubenswrapper[4915]: I0127 
20:09:01.586951 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5200-account-create-update-xkkjg" podStartSLOduration=2.5869305000000002 podStartE2EDuration="2.5869305s" podCreationTimestamp="2026-01-27 20:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:01.580962993 +0000 UTC m=+5232.938816657" watchObservedRunningTime="2026-01-27 20:09:01.5869305 +0000 UTC m=+5232.944784164" Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.556161 4915 generic.go:334] "Generic (PLEG): container finished" podID="0774762f-3eb3-4068-ba74-aeed37d4474e" containerID="1f71c75cb432c24c2fa48dc464d58a7b71ae37a1f95615b832795542499c11e1" exitCode=0 Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.556270 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5200-account-create-update-xkkjg" event={"ID":"0774762f-3eb3-4068-ba74-aeed37d4474e","Type":"ContainerDied","Data":"1f71c75cb432c24c2fa48dc464d58a7b71ae37a1f95615b832795542499c11e1"} Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.891179 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wn645" Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.969222 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts\") pod \"7feb35cc-b62a-40af-9109-63180f240627\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.969298 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85fqb\" (UniqueName: \"kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb\") pod \"7feb35cc-b62a-40af-9109-63180f240627\" (UID: \"7feb35cc-b62a-40af-9109-63180f240627\") " Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.969710 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7feb35cc-b62a-40af-9109-63180f240627" (UID: "7feb35cc-b62a-40af-9109-63180f240627"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:09:02 crc kubenswrapper[4915]: I0127 20:09:02.974409 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb" (OuterVolumeSpecName: "kube-api-access-85fqb") pod "7feb35cc-b62a-40af-9109-63180f240627" (UID: "7feb35cc-b62a-40af-9109-63180f240627"). InnerVolumeSpecName "kube-api-access-85fqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.071262 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7feb35cc-b62a-40af-9109-63180f240627-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.071299 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85fqb\" (UniqueName: \"kubernetes.io/projected/7feb35cc-b62a-40af-9109-63180f240627-kube-api-access-85fqb\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.566652 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wn645" Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.567364 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wn645" event={"ID":"7feb35cc-b62a-40af-9109-63180f240627","Type":"ContainerDied","Data":"b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae"} Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.567429 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2892f324b3f8d3b1dbba385f89633920fad016230d811fc98d629cc2eb7d5ae" Jan 27 20:09:03 crc kubenswrapper[4915]: I0127 20:09:03.960430 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.088764 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts\") pod \"0774762f-3eb3-4068-ba74-aeed37d4474e\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.088903 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87v8c\" (UniqueName: \"kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c\") pod \"0774762f-3eb3-4068-ba74-aeed37d4474e\" (UID: \"0774762f-3eb3-4068-ba74-aeed37d4474e\") " Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.089375 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0774762f-3eb3-4068-ba74-aeed37d4474e" (UID: "0774762f-3eb3-4068-ba74-aeed37d4474e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.093648 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c" (OuterVolumeSpecName: "kube-api-access-87v8c") pod "0774762f-3eb3-4068-ba74-aeed37d4474e" (UID: "0774762f-3eb3-4068-ba74-aeed37d4474e"). InnerVolumeSpecName "kube-api-access-87v8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.191259 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0774762f-3eb3-4068-ba74-aeed37d4474e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.191620 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87v8c\" (UniqueName: \"kubernetes.io/projected/0774762f-3eb3-4068-ba74-aeed37d4474e-kube-api-access-87v8c\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.358046 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:09:04 crc kubenswrapper[4915]: E0127 20:09:04.358532 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.575217 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5200-account-create-update-xkkjg" event={"ID":"0774762f-3eb3-4068-ba74-aeed37d4474e","Type":"ContainerDied","Data":"881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8"} Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.575270 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881784bd823d9265037101f08190a75cdf147949ca310be4aa67ef3f1c75c6e8" Jan 27 20:09:04 crc kubenswrapper[4915]: I0127 20:09:04.575328 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5200-account-create-update-xkkjg" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.656354 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-flbdx"] Jan 27 20:09:05 crc kubenswrapper[4915]: E0127 20:09:05.657130 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7feb35cc-b62a-40af-9109-63180f240627" containerName="mariadb-database-create" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.657153 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7feb35cc-b62a-40af-9109-63180f240627" containerName="mariadb-database-create" Jan 27 20:09:05 crc kubenswrapper[4915]: E0127 20:09:05.657187 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0774762f-3eb3-4068-ba74-aeed37d4474e" containerName="mariadb-account-create-update" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.657198 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0774762f-3eb3-4068-ba74-aeed37d4474e" containerName="mariadb-account-create-update" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.657432 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0774762f-3eb3-4068-ba74-aeed37d4474e" containerName="mariadb-account-create-update" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.657471 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7feb35cc-b62a-40af-9109-63180f240627" containerName="mariadb-database-create" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.658494 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.661399 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.661431 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.661398 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.661400 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dm82j" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.669887 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-flbdx"] Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.714402 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.714593 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.714674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzb4\" (UniqueName: \"kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4\") pod \"keystone-db-sync-flbdx\" (UID: 
\"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.816749 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzb4\" (UniqueName: \"kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.816902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.817030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.823434 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.823726 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: 
I0127 20:09:05.843865 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzb4\" (UniqueName: \"kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4\") pod \"keystone-db-sync-flbdx\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:05 crc kubenswrapper[4915]: I0127 20:09:05.990657 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:06 crc kubenswrapper[4915]: I0127 20:09:06.426165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-flbdx"] Jan 27 20:09:06 crc kubenswrapper[4915]: W0127 20:09:06.430712 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c47c5d_eb47_40e0_9fb6_cf364ff82535.slice/crio-b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8 WatchSource:0}: Error finding container b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8: Status 404 returned error can't find the container with id b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8 Jan 27 20:09:06 crc kubenswrapper[4915]: I0127 20:09:06.591456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flbdx" event={"ID":"68c47c5d-eb47-40e0-9fb6-cf364ff82535","Type":"ContainerStarted","Data":"b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8"} Jan 27 20:09:07 crc kubenswrapper[4915]: I0127 20:09:07.600875 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flbdx" event={"ID":"68c47c5d-eb47-40e0-9fb6-cf364ff82535","Type":"ContainerStarted","Data":"1eac1d864ecbe95a7219bd1585dd9aad27314599f16be69218f0cd2589607878"} Jan 27 20:09:07 crc kubenswrapper[4915]: I0127 20:09:07.629301 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-flbdx" podStartSLOduration=2.629283529 podStartE2EDuration="2.629283529s" podCreationTimestamp="2026-01-27 20:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:07.622305327 +0000 UTC m=+5238.980159031" watchObservedRunningTime="2026-01-27 20:09:07.629283529 +0000 UTC m=+5238.987137193" Jan 27 20:09:08 crc kubenswrapper[4915]: I0127 20:09:08.611005 4915 generic.go:334] "Generic (PLEG): container finished" podID="68c47c5d-eb47-40e0-9fb6-cf364ff82535" containerID="1eac1d864ecbe95a7219bd1585dd9aad27314599f16be69218f0cd2589607878" exitCode=0 Jan 27 20:09:08 crc kubenswrapper[4915]: I0127 20:09:08.611057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flbdx" event={"ID":"68c47c5d-eb47-40e0-9fb6-cf364ff82535","Type":"ContainerDied","Data":"1eac1d864ecbe95a7219bd1585dd9aad27314599f16be69218f0cd2589607878"} Jan 27 20:09:09 crc kubenswrapper[4915]: I0127 20:09:09.918745 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:09 crc kubenswrapper[4915]: I0127 20:09:09.991334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data\") pod \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " Jan 27 20:09:09 crc kubenswrapper[4915]: I0127 20:09:09.991472 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjzb4\" (UniqueName: \"kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4\") pod \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " Jan 27 20:09:09 crc kubenswrapper[4915]: I0127 20:09:09.992233 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle\") pod \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\" (UID: \"68c47c5d-eb47-40e0-9fb6-cf364ff82535\") " Jan 27 20:09:09 crc kubenswrapper[4915]: I0127 20:09:09.996628 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4" (OuterVolumeSpecName: "kube-api-access-vjzb4") pod "68c47c5d-eb47-40e0-9fb6-cf364ff82535" (UID: "68c47c5d-eb47-40e0-9fb6-cf364ff82535"). InnerVolumeSpecName "kube-api-access-vjzb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.015103 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68c47c5d-eb47-40e0-9fb6-cf364ff82535" (UID: "68c47c5d-eb47-40e0-9fb6-cf364ff82535"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.031113 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data" (OuterVolumeSpecName: "config-data") pod "68c47c5d-eb47-40e0-9fb6-cf364ff82535" (UID: "68c47c5d-eb47-40e0-9fb6-cf364ff82535"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.094269 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.094301 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjzb4\" (UniqueName: \"kubernetes.io/projected/68c47c5d-eb47-40e0-9fb6-cf364ff82535-kube-api-access-vjzb4\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.094312 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c47c5d-eb47-40e0-9fb6-cf364ff82535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.632238 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flbdx" event={"ID":"68c47c5d-eb47-40e0-9fb6-cf364ff82535","Type":"ContainerDied","Data":"b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8"} Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.632293 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a9896a88ac2084ba1ba32540b83e961c23670a18562c4704731c70499e08b8" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.632736 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-flbdx" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.932303 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"] Jan 27 20:09:10 crc kubenswrapper[4915]: E0127 20:09:10.933326 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c47c5d-eb47-40e0-9fb6-cf364ff82535" containerName="keystone-db-sync" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.933350 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c47c5d-eb47-40e0-9fb6-cf364ff82535" containerName="keystone-db-sync" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.933914 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c47c5d-eb47-40e0-9fb6-cf364ff82535" containerName="keystone-db-sync" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.936245 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.950183 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"] Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.980145 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6pkv9"] Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.991457 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pkv9"] Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.991582 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.993766 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.996344 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.996409 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dm82j" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.997238 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 20:09:10 crc kubenswrapper[4915]: I0127 20:09:10.997418 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027532 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027642 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle\") pod \"keystone-bootstrap-6pkv9\" (UID: 
\"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027694 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027832 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027886 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7jl\" (UniqueName: \"kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027948 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.027987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" 
(UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.028208 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.028269 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.028290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59cb\" (UniqueName: \"kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130086 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7jl\" (UniqueName: \"kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl\") pod \"keystone-bootstrap-6pkv9\" (UID: 
\"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130193 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130220 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130342 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59cb\" (UniqueName: \"kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 
27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130365 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130397 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130453 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.130488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.131478 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.131516 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.131478 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.131635 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.134637 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.135147 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.135702 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys\") pod 
\"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.136302 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.142412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.151066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7jl\" (UniqueName: \"kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl\") pod \"keystone-bootstrap-6pkv9\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") " pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.151403 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59cb\" (UniqueName: \"kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb\") pod \"dnsmasq-dns-7cfd7647fc-5gmx8\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") " pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.273584 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.309145 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6pkv9" Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.752443 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"] Jan 27 20:09:11 crc kubenswrapper[4915]: W0127 20:09:11.756667 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5991eefb_0085_47ea_832a_1c39bbc67d9f.slice/crio-e3f4b4493337326b8407b0755d0455a67e0ee9de57f1105d40d16cd7a916a44c WatchSource:0}: Error finding container e3f4b4493337326b8407b0755d0455a67e0ee9de57f1105d40d16cd7a916a44c: Status 404 returned error can't find the container with id e3f4b4493337326b8407b0755d0455a67e0ee9de57f1105d40d16cd7a916a44c Jan 27 20:09:11 crc kubenswrapper[4915]: I0127 20:09:11.819024 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pkv9"] Jan 27 20:09:11 crc kubenswrapper[4915]: W0127 20:09:11.820713 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod630c710d_870e_4d01_8378_de652f333582.slice/crio-e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4 WatchSource:0}: Error finding container e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4: Status 404 returned error can't find the container with id e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4 Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.652342 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pkv9" event={"ID":"630c710d-870e-4d01-8378-de652f333582","Type":"ContainerStarted","Data":"a626ec9f6560ebaf3606b6da727c0258c28c2cd727b0871464cbb89e98658d58"} Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.652667 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pkv9" 
event={"ID":"630c710d-870e-4d01-8378-de652f333582","Type":"ContainerStarted","Data":"e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4"} Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.656728 4915 generic.go:334] "Generic (PLEG): container finished" podID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerID="e277b0f1fbcdc984b0bdf49b828ecd2dfbba1181620e97a978eeb24af43de543" exitCode=0 Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.656829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" event={"ID":"5991eefb-0085-47ea-832a-1c39bbc67d9f","Type":"ContainerDied","Data":"e277b0f1fbcdc984b0bdf49b828ecd2dfbba1181620e97a978eeb24af43de543"} Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.656864 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" event={"ID":"5991eefb-0085-47ea-832a-1c39bbc67d9f","Type":"ContainerStarted","Data":"e3f4b4493337326b8407b0755d0455a67e0ee9de57f1105d40d16cd7a916a44c"} Jan 27 20:09:12 crc kubenswrapper[4915]: I0127 20:09:12.682433 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6pkv9" podStartSLOduration=2.682380428 podStartE2EDuration="2.682380428s" podCreationTimestamp="2026-01-27 20:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:12.671517521 +0000 UTC m=+5244.029371235" watchObservedRunningTime="2026-01-27 20:09:12.682380428 +0000 UTC m=+5244.040234132" Jan 27 20:09:13 crc kubenswrapper[4915]: I0127 20:09:13.666988 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" event={"ID":"5991eefb-0085-47ea-832a-1c39bbc67d9f","Type":"ContainerStarted","Data":"e965138ea48fcae0d00fa65a5456d31302ea16cd9310efa9f50b8a276edddd55"} Jan 27 20:09:13 crc kubenswrapper[4915]: I0127 20:09:13.667383 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8"
Jan 27 20:09:13 crc kubenswrapper[4915]: I0127 20:09:13.690992 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" podStartSLOduration=3.690963093 podStartE2EDuration="3.690963093s" podCreationTimestamp="2026-01-27 20:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:13.689506767 +0000 UTC m=+5245.047360431" watchObservedRunningTime="2026-01-27 20:09:13.690963093 +0000 UTC m=+5245.048816797"
Jan 27 20:09:15 crc kubenswrapper[4915]: I0127 20:09:15.063573 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 27 20:09:16 crc kubenswrapper[4915]: I0127 20:09:16.700040 4915 generic.go:334] "Generic (PLEG): container finished" podID="630c710d-870e-4d01-8378-de652f333582" containerID="a626ec9f6560ebaf3606b6da727c0258c28c2cd727b0871464cbb89e98658d58" exitCode=0
Jan 27 20:09:16 crc kubenswrapper[4915]: I0127 20:09:16.700166 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pkv9" event={"ID":"630c710d-870e-4d01-8378-de652f333582","Type":"ContainerDied","Data":"a626ec9f6560ebaf3606b6da727c0258c28c2cd727b0871464cbb89e98658d58"}
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.082441 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pkv9"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.246582 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.246936 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.247160 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7jl\" (UniqueName: \"kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.247368 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.247468 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.247629 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts\") pod \"630c710d-870e-4d01-8378-de652f333582\" (UID: \"630c710d-870e-4d01-8378-de652f333582\") "
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.253030 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts" (OuterVolumeSpecName: "scripts") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.253596 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.253701 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl" (OuterVolumeSpecName: "kube-api-access-jl7jl") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "kube-api-access-jl7jl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.255199 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.277922 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data" (OuterVolumeSpecName: "config-data") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.284854 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "630c710d-870e-4d01-8378-de652f333582" (UID: "630c710d-870e-4d01-8378-de652f333582"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.350428 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.350837 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.351012 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.351113 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl7jl\" (UniqueName: \"kubernetes.io/projected/630c710d-870e-4d01-8378-de652f333582-kube-api-access-jl7jl\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.351205 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.351319 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630c710d-870e-4d01-8378-de652f333582-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.728310 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pkv9" event={"ID":"630c710d-870e-4d01-8378-de652f333582","Type":"ContainerDied","Data":"e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4"}
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.728362 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0eb4aefbda5c567e8308e9e8cf6003dd058526ebcf73a342722890d87a287f4"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.728392 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pkv9"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.826570 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6pkv9"]
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.844786 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6pkv9"]
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.909605 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bs7n9"]
Jan 27 20:09:18 crc kubenswrapper[4915]: E0127 20:09:18.910170 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630c710d-870e-4d01-8378-de652f333582" containerName="keystone-bootstrap"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.910200 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="630c710d-870e-4d01-8378-de652f333582" containerName="keystone-bootstrap"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.910560 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="630c710d-870e-4d01-8378-de652f333582" containerName="keystone-bootstrap"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.911490 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.915647 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dm82j"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.916051 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.916831 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.917215 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.917627 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 20:09:18 crc kubenswrapper[4915]: I0127 20:09:18.920122 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bs7n9"]
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.063542 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.063613 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.063694 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.063751 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.064376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.064429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdgcn\" (UniqueName: \"kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.165754 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.165905 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.165957 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.166016 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdgcn\" (UniqueName: \"kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.166113 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.166157 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.172497 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.172865 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.173260 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.174883 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.175184 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.186037 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdgcn\" (UniqueName: \"kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn\") pod \"keystone-bootstrap-bs7n9\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") " pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.245616 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.364039 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda"
Jan 27 20:09:19 crc kubenswrapper[4915]: E0127 20:09:19.364312 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.367121 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630c710d-870e-4d01-8378-de652f333582" path="/var/lib/kubelet/pods/630c710d-870e-4d01-8378-de652f333582/volumes"
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.691554 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bs7n9"]
Jan 27 20:09:19 crc kubenswrapper[4915]: I0127 20:09:19.746535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bs7n9" event={"ID":"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec","Type":"ContainerStarted","Data":"fe2e4974ab4fe245d682aad67c4ed1fd6ee94d15b830819e79c01cf3f4d5ebd4"}
Jan 27 20:09:20 crc kubenswrapper[4915]: I0127 20:09:20.757703 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bs7n9" event={"ID":"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec","Type":"ContainerStarted","Data":"9deabbead6040a871c97f757bd55286ddf89c449ef92688af48b8877574b328a"}
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.274937 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8"
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.304180 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bs7n9" podStartSLOduration=3.304157775 podStartE2EDuration="3.304157775s" podCreationTimestamp="2026-01-27 20:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:20.789845746 +0000 UTC m=+5252.147699440" watchObservedRunningTime="2026-01-27 20:09:21.304157775 +0000 UTC m=+5252.662011439"
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.340125 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.340514 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="dnsmasq-dns" containerID="cri-o://849c54f277444de56721b46a6e90f1d198c2a4091d2b315b3d8ff9750ad58c5f" gracePeriod=10
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.770192 4915 generic.go:334] "Generic (PLEG): container finished" podID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerID="849c54f277444de56721b46a6e90f1d198c2a4091d2b315b3d8ff9750ad58c5f" exitCode=0
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.770374 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" event={"ID":"f3bda1c2-6b01-40ff-81f2-7056d572ac93","Type":"ContainerDied","Data":"849c54f277444de56721b46a6e90f1d198c2a4091d2b315b3d8ff9750ad58c5f"}
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.771833 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf" event={"ID":"f3bda1c2-6b01-40ff-81f2-7056d572ac93","Type":"ContainerDied","Data":"7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a"}
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.771939 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7865701e35f1751546b9ea10b2f8f08eab418c45911c2923fae7772ca3bb1d8a"
Jan 27 20:09:21 crc kubenswrapper[4915]: I0127 20:09:21.816000 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.015561 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb\") pod \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") "
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.015905 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-927tr\" (UniqueName: \"kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr\") pod \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") "
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.016092 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb\") pod \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") "
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.016165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config\") pod \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") "
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.016283 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc\") pod \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\" (UID: \"f3bda1c2-6b01-40ff-81f2-7056d572ac93\") "
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.024084 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr" (OuterVolumeSpecName: "kube-api-access-927tr") pod "f3bda1c2-6b01-40ff-81f2-7056d572ac93" (UID: "f3bda1c2-6b01-40ff-81f2-7056d572ac93"). InnerVolumeSpecName "kube-api-access-927tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.055363 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3bda1c2-6b01-40ff-81f2-7056d572ac93" (UID: "f3bda1c2-6b01-40ff-81f2-7056d572ac93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.057704 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config" (OuterVolumeSpecName: "config") pod "f3bda1c2-6b01-40ff-81f2-7056d572ac93" (UID: "f3bda1c2-6b01-40ff-81f2-7056d572ac93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.058374 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3bda1c2-6b01-40ff-81f2-7056d572ac93" (UID: "f3bda1c2-6b01-40ff-81f2-7056d572ac93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.058415 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3bda1c2-6b01-40ff-81f2-7056d572ac93" (UID: "f3bda1c2-6b01-40ff-81f2-7056d572ac93"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.118305 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.118340 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-927tr\" (UniqueName: \"kubernetes.io/projected/f3bda1c2-6b01-40ff-81f2-7056d572ac93-kube-api-access-927tr\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.118350 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.118360 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.118370 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3bda1c2-6b01-40ff-81f2-7056d572ac93-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.780764 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6646cf9bdf-52hbf"
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.826076 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:09:22 crc kubenswrapper[4915]: I0127 20:09:22.844730 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6646cf9bdf-52hbf"]
Jan 27 20:09:23 crc kubenswrapper[4915]: I0127 20:09:23.368694 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" path="/var/lib/kubelet/pods/f3bda1c2-6b01-40ff-81f2-7056d572ac93/volumes"
Jan 27 20:09:23 crc kubenswrapper[4915]: I0127 20:09:23.791559 4915 generic.go:334] "Generic (PLEG): container finished" podID="e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" containerID="9deabbead6040a871c97f757bd55286ddf89c449ef92688af48b8877574b328a" exitCode=0
Jan 27 20:09:23 crc kubenswrapper[4915]: I0127 20:09:23.791604 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bs7n9" event={"ID":"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec","Type":"ContainerDied","Data":"9deabbead6040a871c97f757bd55286ddf89c449ef92688af48b8877574b328a"}
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.154928 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.172699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.172745 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.172821 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdgcn\" (UniqueName: \"kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.172973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.173591 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.173655 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys\") pod \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\" (UID: \"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec\") "
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.178868 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts" (OuterVolumeSpecName: "scripts") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.179640 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.183448 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn" (OuterVolumeSpecName: "kube-api-access-zdgcn") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "kube-api-access-zdgcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.183631 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.200463 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data" (OuterVolumeSpecName: "config-data") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.201516 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" (UID: "e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275264 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275487 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275574 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275627 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275677 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdgcn\" (UniqueName: \"kubernetes.io/projected/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-kube-api-access-zdgcn\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.275728 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.813332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bs7n9" event={"ID":"e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec","Type":"ContainerDied","Data":"fe2e4974ab4fe245d682aad67c4ed1fd6ee94d15b830819e79c01cf3f4d5ebd4"}
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.813746 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2e4974ab4fe245d682aad67c4ed1fd6ee94d15b830819e79c01cf3f4d5ebd4"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.813412 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bs7n9"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900047 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5d48b84649-qr295"]
Jan 27 20:09:25 crc kubenswrapper[4915]: E0127 20:09:25.900484 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" containerName="keystone-bootstrap"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900512 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" containerName="keystone-bootstrap"
Jan 27 20:09:25 crc kubenswrapper[4915]: E0127 20:09:25.900540 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="init"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900553 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="init"
Jan 27 20:09:25 crc kubenswrapper[4915]: E0127 20:09:25.900577 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="dnsmasq-dns"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900588 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="dnsmasq-dns"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900871 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3bda1c2-6b01-40ff-81f2-7056d572ac93" containerName="dnsmasq-dns"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.900894 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" containerName="keystone-bootstrap"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.901694 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d48b84649-qr295"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.904433 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.904747 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.907399 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.915546 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dm82j"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.929011 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d48b84649-qr295"]
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrv2g\" (UniqueName: \"kubernetes.io/projected/5b9cdc8a-c579-45e2-994b-9119ac49807f-kube-api-access-vrv2g\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-fernet-keys\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295"
Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985304 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-combined-ca-bundle\") pod
\"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985381 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-scripts\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985440 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-config-data\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:25 crc kubenswrapper[4915]: I0127 20:09:25.985495 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-credential-keys\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087027 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrv2g\" (UniqueName: \"kubernetes.io/projected/5b9cdc8a-c579-45e2-994b-9119ac49807f-kube-api-access-vrv2g\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-fernet-keys\") pod \"keystone-5d48b84649-qr295\" (UID: 
\"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087149 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-combined-ca-bundle\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-scripts\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087237 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-config-data\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.087270 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-credential-keys\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.091076 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-scripts\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: 
I0127 20:09:26.091149 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-credential-keys\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.091243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-combined-ca-bundle\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.092023 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-config-data\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.093052 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b9cdc8a-c579-45e2-994b-9119ac49807f-fernet-keys\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.102702 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrv2g\" (UniqueName: \"kubernetes.io/projected/5b9cdc8a-c579-45e2-994b-9119ac49807f-kube-api-access-vrv2g\") pod \"keystone-5d48b84649-qr295\" (UID: \"5b9cdc8a-c579-45e2-994b-9119ac49807f\") " pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.229135 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.666062 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d48b84649-qr295"] Jan 27 20:09:26 crc kubenswrapper[4915]: I0127 20:09:26.820257 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d48b84649-qr295" event={"ID":"5b9cdc8a-c579-45e2-994b-9119ac49807f","Type":"ContainerStarted","Data":"8e8e7cffbf063af651bc650dba24b653344dbc96683a3044ba00f51c67aa16fc"} Jan 27 20:09:27 crc kubenswrapper[4915]: I0127 20:09:27.833553 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d48b84649-qr295" event={"ID":"5b9cdc8a-c579-45e2-994b-9119ac49807f","Type":"ContainerStarted","Data":"9bbcbeccf9cd322afd3f754ccadc480c42e3bffdec1d4cd3cf630fe3f0de7287"} Jan 27 20:09:27 crc kubenswrapper[4915]: I0127 20:09:27.834069 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:27 crc kubenswrapper[4915]: I0127 20:09:27.866337 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5d48b84649-qr295" podStartSLOduration=2.8663098270000003 podStartE2EDuration="2.866309827s" podCreationTimestamp="2026-01-27 20:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:27.85869874 +0000 UTC m=+5259.216552414" watchObservedRunningTime="2026-01-27 20:09:27.866309827 +0000 UTC m=+5259.224163511" Jan 27 20:09:33 crc kubenswrapper[4915]: I0127 20:09:33.358675 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:09:33 crc kubenswrapper[4915]: E0127 20:09:33.359469 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:09:48 crc kubenswrapper[4915]: I0127 20:09:48.359475 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:09:48 crc kubenswrapper[4915]: E0127 20:09:48.360312 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:09:56 crc kubenswrapper[4915]: I0127 20:09:56.040523 4915 scope.go:117] "RemoveContainer" containerID="502c4aff969ee29cb603ffa4e1aa3108a4bdca6260e4c2ba8b9f272264df359b" Jan 27 20:09:56 crc kubenswrapper[4915]: I0127 20:09:56.075368 4915 scope.go:117] "RemoveContainer" containerID="7ef96f8218d470885fe2e4cf1dcf50865e97144dc3cabc08081a586c3c6882d8" Jan 27 20:09:57 crc kubenswrapper[4915]: I0127 20:09:57.694743 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5d48b84649-qr295" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.858360 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.859593 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.862821 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9h64g" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.863548 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.871506 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 20:09:59 crc kubenswrapper[4915]: E0127 20:09:59.877952 4915 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.878315 4915 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68411266-82fb-419d-b4d2-4d779293bb64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtmwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:09:59Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.879842 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.889742 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 20:09:59 crc kubenswrapper[4915]: E0127 20:09:59.890298 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-mtmwx openstack-config 
openstack-config-secret], unattached volumes=[], failed to process volumes=[kube-api-access-mtmwx openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="68411266-82fb-419d-b4d2-4d779293bb64" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.898919 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.912272 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.914545 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.933616 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="68411266-82fb-419d-b4d2-4d779293bb64" podUID="85479810-8690-4848-a11f-f7cec2d3b63a" Jan 27 20:09:59 crc kubenswrapper[4915]: I0127 20:09:59.933669 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.087193 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2fx\" (UniqueName: \"kubernetes.io/projected/85479810-8690-4848-a11f-f7cec2d3b63a-kube-api-access-nb2fx\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.087327 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config-secret\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.087460 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.104136 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.107637 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="68411266-82fb-419d-b4d2-4d779293bb64" podUID="85479810-8690-4848-a11f-f7cec2d3b63a" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.114140 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.117691 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="68411266-82fb-419d-b4d2-4d779293bb64" podUID="85479810-8690-4848-a11f-f7cec2d3b63a" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.189241 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config-secret\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.189379 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc 
kubenswrapper[4915]: I0127 20:10:00.189437 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2fx\" (UniqueName: \"kubernetes.io/projected/85479810-8690-4848-a11f-f7cec2d3b63a-kube-api-access-nb2fx\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.190391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.196237 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85479810-8690-4848-a11f-f7cec2d3b63a-openstack-config-secret\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.205248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2fx\" (UniqueName: \"kubernetes.io/projected/85479810-8690-4848-a11f-f7cec2d3b63a-kube-api-access-nb2fx\") pod \"openstackclient\" (UID: \"85479810-8690-4848-a11f-f7cec2d3b63a\") " pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.232227 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 20:10:00 crc kubenswrapper[4915]: I0127 20:10:00.691462 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.114189 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.114188 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85479810-8690-4848-a11f-f7cec2d3b63a","Type":"ContainerStarted","Data":"54a994827d3fd893d81544449af99c5d68e8274eeba1acc348be42aa1dd06a8e"} Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.114749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85479810-8690-4848-a11f-f7cec2d3b63a","Type":"ContainerStarted","Data":"781ad8aa9dde3f570122dc53fe2d7a6b73c1396802eae754e4bef561bf3b53e3"} Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.136724 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.136692355 podStartE2EDuration="2.136692355s" podCreationTimestamp="2026-01-27 20:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:01.136636764 +0000 UTC m=+5292.494490458" watchObservedRunningTime="2026-01-27 20:10:01.136692355 +0000 UTC m=+5292.494546049" Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.140390 4915 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="68411266-82fb-419d-b4d2-4d779293bb64" podUID="85479810-8690-4848-a11f-f7cec2d3b63a" Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.303862 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.358206 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:10:01 crc kubenswrapper[4915]: E0127 20:10:01.358869 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:10:01 crc kubenswrapper[4915]: I0127 20:10:01.384856 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68411266-82fb-419d-b4d2-4d779293bb64" path="/var/lib/kubelet/pods/68411266-82fb-419d-b4d2-4d779293bb64/volumes" Jan 27 20:10:14 crc kubenswrapper[4915]: I0127 20:10:14.357845 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:10:14 crc kubenswrapper[4915]: E0127 20:10:14.358505 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:10:29 crc kubenswrapper[4915]: I0127 20:10:29.362922 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:10:29 crc kubenswrapper[4915]: E0127 20:10:29.363835 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:10:43 crc kubenswrapper[4915]: I0127 20:10:43.357780 
4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:10:43 crc kubenswrapper[4915]: E0127 20:10:43.358585 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:10:57 crc kubenswrapper[4915]: I0127 20:10:57.357732 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:10:57 crc kubenswrapper[4915]: E0127 20:10:57.358560 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:11:08 crc kubenswrapper[4915]: I0127 20:11:08.358107 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:11:08 crc kubenswrapper[4915]: E0127 20:11:08.359293 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:11:23 crc kubenswrapper[4915]: I0127 
20:11:23.358661 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:11:23 crc kubenswrapper[4915]: I0127 20:11:23.821276 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480"} Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.034668 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8dtdm"] Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.038036 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.045209 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8dtdm"] Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.055206 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a1c8-account-create-update-xmztl"] Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.056911 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.061444 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.083415 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a1c8-account-create-update-xmztl"] Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.148975 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.149065 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4826\" (UniqueName: \"kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.251137 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjch\" (UniqueName: \"kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch\") pod \"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.251284 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts\") pod 
\"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.251371 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.251390 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4826\" (UniqueName: \"kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.252619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.275604 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4826\" (UniqueName: \"kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826\") pod \"barbican-db-create-8dtdm\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.352741 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts\") pod 
\"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.352874 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjch\" (UniqueName: \"kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch\") pod \"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.353954 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts\") pod \"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.370163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjch\" (UniqueName: \"kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch\") pod \"barbican-a1c8-account-create-update-xmztl\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.373049 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.383286 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.827185 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a1c8-account-create-update-xmztl"] Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.897029 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8dtdm"] Jan 27 20:11:39 crc kubenswrapper[4915]: W0127 20:11:39.897659 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71be97d2_7ddc_4a74_8aae_e1db009018fe.slice/crio-84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053 WatchSource:0}: Error finding container 84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053: Status 404 returned error can't find the container with id 84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053 Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.941920 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8dtdm" event={"ID":"71be97d2-7ddc-4a74-8aae-e1db009018fe","Type":"ContainerStarted","Data":"84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053"} Jan 27 20:11:39 crc kubenswrapper[4915]: I0127 20:11:39.943613 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a1c8-account-create-update-xmztl" event={"ID":"9ea86373-6961-41e9-818c-68a2f3f0805f","Type":"ContainerStarted","Data":"841bfe3889b396751d6bb0b0e8b8804f1079f167e2a010b4509ef216ab5c2e55"} Jan 27 20:11:40 crc kubenswrapper[4915]: I0127 20:11:40.953323 4915 generic.go:334] "Generic (PLEG): container finished" podID="9ea86373-6961-41e9-818c-68a2f3f0805f" containerID="34c909baded24626db964fa82be273a30a88ffc8025559bd81c37ca1adaeb8ba" exitCode=0 Jan 27 20:11:40 crc kubenswrapper[4915]: I0127 20:11:40.953383 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-a1c8-account-create-update-xmztl" event={"ID":"9ea86373-6961-41e9-818c-68a2f3f0805f","Type":"ContainerDied","Data":"34c909baded24626db964fa82be273a30a88ffc8025559bd81c37ca1adaeb8ba"} Jan 27 20:11:40 crc kubenswrapper[4915]: I0127 20:11:40.954887 4915 generic.go:334] "Generic (PLEG): container finished" podID="71be97d2-7ddc-4a74-8aae-e1db009018fe" containerID="1c22f088a4a7ca8c44e753f1c279892adeb439bc461ced75809af22ba214b798" exitCode=0 Jan 27 20:11:40 crc kubenswrapper[4915]: I0127 20:11:40.954918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8dtdm" event={"ID":"71be97d2-7ddc-4a74-8aae-e1db009018fe","Type":"ContainerDied","Data":"1c22f088a4a7ca8c44e753f1c279892adeb439bc461ced75809af22ba214b798"} Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.330293 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.335106 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.501600 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4826\" (UniqueName: \"kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826\") pod \"71be97d2-7ddc-4a74-8aae-e1db009018fe\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.501662 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmjch\" (UniqueName: \"kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch\") pod \"9ea86373-6961-41e9-818c-68a2f3f0805f\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.501729 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts\") pod \"71be97d2-7ddc-4a74-8aae-e1db009018fe\" (UID: \"71be97d2-7ddc-4a74-8aae-e1db009018fe\") " Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.501804 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts\") pod \"9ea86373-6961-41e9-818c-68a2f3f0805f\" (UID: \"9ea86373-6961-41e9-818c-68a2f3f0805f\") " Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.502511 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71be97d2-7ddc-4a74-8aae-e1db009018fe" (UID: "71be97d2-7ddc-4a74-8aae-e1db009018fe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.503325 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ea86373-6961-41e9-818c-68a2f3f0805f" (UID: "9ea86373-6961-41e9-818c-68a2f3f0805f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.513695 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch" (OuterVolumeSpecName: "kube-api-access-qmjch") pod "9ea86373-6961-41e9-818c-68a2f3f0805f" (UID: "9ea86373-6961-41e9-818c-68a2f3f0805f"). InnerVolumeSpecName "kube-api-access-qmjch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.513834 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826" (OuterVolumeSpecName: "kube-api-access-n4826") pod "71be97d2-7ddc-4a74-8aae-e1db009018fe" (UID: "71be97d2-7ddc-4a74-8aae-e1db009018fe"). InnerVolumeSpecName "kube-api-access-n4826". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.603595 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4826\" (UniqueName: \"kubernetes.io/projected/71be97d2-7ddc-4a74-8aae-e1db009018fe-kube-api-access-n4826\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.603638 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmjch\" (UniqueName: \"kubernetes.io/projected/9ea86373-6961-41e9-818c-68a2f3f0805f-kube-api-access-qmjch\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.603649 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71be97d2-7ddc-4a74-8aae-e1db009018fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.603658 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ea86373-6961-41e9-818c-68a2f3f0805f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.974337 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a1c8-account-create-update-xmztl" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.974333 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a1c8-account-create-update-xmztl" event={"ID":"9ea86373-6961-41e9-818c-68a2f3f0805f","Type":"ContainerDied","Data":"841bfe3889b396751d6bb0b0e8b8804f1079f167e2a010b4509ef216ab5c2e55"} Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.974547 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841bfe3889b396751d6bb0b0e8b8804f1079f167e2a010b4509ef216ab5c2e55" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.977818 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8dtdm" event={"ID":"71be97d2-7ddc-4a74-8aae-e1db009018fe","Type":"ContainerDied","Data":"84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053"} Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.977876 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dbb426aee0480b010e60008a03dce5944a7e76815c5c7b391f285e2d496053" Jan 27 20:11:42 crc kubenswrapper[4915]: I0127 20:11:42.977930 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8dtdm" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.426675 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dhghd"] Jan 27 20:11:44 crc kubenswrapper[4915]: E0127 20:11:44.427540 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea86373-6961-41e9-818c-68a2f3f0805f" containerName="mariadb-account-create-update" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.427562 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea86373-6961-41e9-818c-68a2f3f0805f" containerName="mariadb-account-create-update" Jan 27 20:11:44 crc kubenswrapper[4915]: E0127 20:11:44.427601 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71be97d2-7ddc-4a74-8aae-e1db009018fe" containerName="mariadb-database-create" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.427614 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="71be97d2-7ddc-4a74-8aae-e1db009018fe" containerName="mariadb-database-create" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.427921 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="71be97d2-7ddc-4a74-8aae-e1db009018fe" containerName="mariadb-database-create" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.427966 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea86373-6961-41e9-818c-68a2f3f0805f" containerName="mariadb-account-create-update" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.428815 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.430833 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hkgb5" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.434909 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.434969 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dhghd"] Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.534150 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.534428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwm9k\" (UniqueName: \"kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.534543 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.636447 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwm9k\" (UniqueName: 
\"kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.636570 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.636676 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.644050 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.644245 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle\") pod \"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.669916 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwm9k\" (UniqueName: \"kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k\") pod 
\"barbican-db-sync-dhghd\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:44 crc kubenswrapper[4915]: I0127 20:11:44.753871 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:45 crc kubenswrapper[4915]: I0127 20:11:45.180645 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dhghd"] Jan 27 20:11:45 crc kubenswrapper[4915]: W0127 20:11:45.183920 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c4e332_4694_40df_b17a_fc0755eebddc.slice/crio-7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19 WatchSource:0}: Error finding container 7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19: Status 404 returned error can't find the container with id 7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19 Jan 27 20:11:46 crc kubenswrapper[4915]: I0127 20:11:46.006387 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhghd" event={"ID":"93c4e332-4694-40df-b17a-fc0755eebddc","Type":"ContainerStarted","Data":"75ca7a4d14b11f7ecc0d50303e549b9349cc3096a61dedfb1c72bece53a0fc6f"} Jan 27 20:11:46 crc kubenswrapper[4915]: I0127 20:11:46.006454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhghd" event={"ID":"93c4e332-4694-40df-b17a-fc0755eebddc","Type":"ContainerStarted","Data":"7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19"} Jan 27 20:11:46 crc kubenswrapper[4915]: I0127 20:11:46.032724 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dhghd" podStartSLOduration=2.032702747 podStartE2EDuration="2.032702747s" podCreationTimestamp="2026-01-27 20:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-27 20:11:46.025054338 +0000 UTC m=+5397.382908042" watchObservedRunningTime="2026-01-27 20:11:46.032702747 +0000 UTC m=+5397.390556411" Jan 27 20:11:47 crc kubenswrapper[4915]: I0127 20:11:47.014817 4915 generic.go:334] "Generic (PLEG): container finished" podID="93c4e332-4694-40df-b17a-fc0755eebddc" containerID="75ca7a4d14b11f7ecc0d50303e549b9349cc3096a61dedfb1c72bece53a0fc6f" exitCode=0 Jan 27 20:11:47 crc kubenswrapper[4915]: I0127 20:11:47.014862 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhghd" event={"ID":"93c4e332-4694-40df-b17a-fc0755eebddc","Type":"ContainerDied","Data":"75ca7a4d14b11f7ecc0d50303e549b9349cc3096a61dedfb1c72bece53a0fc6f"} Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.363380 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.500321 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwm9k\" (UniqueName: \"kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k\") pod \"93c4e332-4694-40df-b17a-fc0755eebddc\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.500412 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle\") pod \"93c4e332-4694-40df-b17a-fc0755eebddc\" (UID: \"93c4e332-4694-40df-b17a-fc0755eebddc\") " Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.500493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data\") pod \"93c4e332-4694-40df-b17a-fc0755eebddc\" (UID: 
\"93c4e332-4694-40df-b17a-fc0755eebddc\") " Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.505974 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "93c4e332-4694-40df-b17a-fc0755eebddc" (UID: "93c4e332-4694-40df-b17a-fc0755eebddc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.507591 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k" (OuterVolumeSpecName: "kube-api-access-pwm9k") pod "93c4e332-4694-40df-b17a-fc0755eebddc" (UID: "93c4e332-4694-40df-b17a-fc0755eebddc"). InnerVolumeSpecName "kube-api-access-pwm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.524090 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93c4e332-4694-40df-b17a-fc0755eebddc" (UID: "93c4e332-4694-40df-b17a-fc0755eebddc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.602674 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.602708 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwm9k\" (UniqueName: \"kubernetes.io/projected/93c4e332-4694-40df-b17a-fc0755eebddc-kube-api-access-pwm9k\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:48 crc kubenswrapper[4915]: I0127 20:11:48.602717 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93c4e332-4694-40df-b17a-fc0755eebddc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.049393 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dhghd" event={"ID":"93c4e332-4694-40df-b17a-fc0755eebddc","Type":"ContainerDied","Data":"7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19"} Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.049443 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7968203424d8e90b6ea5d8610513dbd48979db4caca2714bbf7d222a80d1dd19" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.049481 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dhghd" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.258019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-575f575d8f-mwqfl"] Jan 27 20:11:49 crc kubenswrapper[4915]: E0127 20:11:49.258510 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c4e332-4694-40df-b17a-fc0755eebddc" containerName="barbican-db-sync" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.258536 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c4e332-4694-40df-b17a-fc0755eebddc" containerName="barbican-db-sync" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.258743 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c4e332-4694-40df-b17a-fc0755eebddc" containerName="barbican-db-sync" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.259946 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.266162 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.271457 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hkgb5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.271862 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.287919 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-575f575d8f-mwqfl"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.314956 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c875f5dbd-4s44w"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.316474 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.321083 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.349548 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c875f5dbd-4s44w"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.378909 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.384265 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.387250 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417139 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417226 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-combined-ca-bundle\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417257 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-combined-ca-bundle\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417285 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m47m\" (UniqueName: \"kubernetes.io/projected/80fb5a13-de72-4173-a859-349b05c9d042-kube-api-access-5m47m\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbdhn\" (UniqueName: \"kubernetes.io/projected/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-kube-api-access-kbdhn\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417330 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data-custom\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc 
kubenswrapper[4915]: I0127 20:11:49.417372 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fb5a13-de72-4173-a859-349b05c9d042-logs\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417395 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-logs\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.417412 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data-custom\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.424538 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6985bc6658-t8db5"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.425964 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.435755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.445787 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6985bc6658-t8db5"] Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.518749 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-combined-ca-bundle\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.518906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m47m\" (UniqueName: \"kubernetes.io/projected/80fb5a13-de72-4173-a859-349b05c9d042-kube-api-access-5m47m\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.518942 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data-custom\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.518991 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbdhn\" (UniqueName: \"kubernetes.io/projected/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-kube-api-access-kbdhn\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: 
\"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519096 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data-custom\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519134 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519159 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519187 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519230 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54tr\" (UniqueName: \"kubernetes.io/projected/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-kube-api-access-n54tr\") pod 
\"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519324 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fb5a13-de72-4173-a859-349b05c9d042-logs\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519366 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-logs\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519393 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data-custom\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519421 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjnd\" (UniqueName: 
\"kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-logs\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519541 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-combined-ca-bundle\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519572 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519600 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.519625 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-combined-ca-bundle\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.520576 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fb5a13-de72-4173-a859-349b05c9d042-logs\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.521900 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.522236 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.522355 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.530862 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-combined-ca-bundle\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc 
kubenswrapper[4915]: I0127 20:11:49.531123 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-logs\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.531677 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-combined-ca-bundle\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.534625 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data-custom\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.535274 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data-custom\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.536057 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m47m\" (UniqueName: \"kubernetes.io/projected/80fb5a13-de72-4173-a859-349b05c9d042-kube-api-access-5m47m\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc 
kubenswrapper[4915]: I0127 20:11:49.537735 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbdhn\" (UniqueName: \"kubernetes.io/projected/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-kube-api-access-kbdhn\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.537929 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b806ab6-0402-4f4a-8163-d2bb7da4beb5-config-data\") pod \"barbican-worker-575f575d8f-mwqfl\" (UID: \"3b806ab6-0402-4f4a-8163-d2bb7da4beb5\") " pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.538550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fb5a13-de72-4173-a859-349b05c9d042-config-data\") pod \"barbican-keystone-listener-5c875f5dbd-4s44w\" (UID: \"80fb5a13-de72-4173-a859-349b05c9d042\") " pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.607748 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hkgb5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.616529 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-575f575d8f-mwqfl" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620721 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjnd\" (UniqueName: \"kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-logs\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620831 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-combined-ca-bundle\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620857 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data-custom\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " 
pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620937 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.620999 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54tr\" (UniqueName: \"kubernetes.io/projected/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-kube-api-access-n54tr\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.621031 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc 
kubenswrapper[4915]: I0127 20:11:49.621850 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-logs\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.622314 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.622330 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.622666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.622991 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.626093 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.626325 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-combined-ca-bundle\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.626748 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-config-data-custom\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.638732 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.644163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54tr\" (UniqueName: \"kubernetes.io/projected/3614bc8f-8773-4f22-a9bd-01c34adbd6cc-kube-api-access-n54tr\") pod \"barbican-api-6985bc6658-t8db5\" (UID: \"3614bc8f-8773-4f22-a9bd-01c34adbd6cc\") " pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.649385 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjnd\" (UniqueName: \"kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd\") pod \"dnsmasq-dns-79ffc8877c-t8ggr\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") " pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.716395 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" Jan 27 20:11:49 crc kubenswrapper[4915]: I0127 20:11:49.754354 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:50 crc kubenswrapper[4915]: I0127 20:11:50.153538 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c875f5dbd-4s44w"] Jan 27 20:11:50 crc kubenswrapper[4915]: I0127 20:11:50.272597 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-575f575d8f-mwqfl"] Jan 27 20:11:50 crc kubenswrapper[4915]: W0127 20:11:50.282800 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b806ab6_0402_4f4a_8163_d2bb7da4beb5.slice/crio-2daec61a30ba5d182c74f915640dee7a576df7bdb53a23834a0951fe9f2bf08f WatchSource:0}: Error finding container 2daec61a30ba5d182c74f915640dee7a576df7bdb53a23834a0951fe9f2bf08f: Status 404 returned error can't find the container with id 2daec61a30ba5d182c74f915640dee7a576df7bdb53a23834a0951fe9f2bf08f Jan 27 20:11:50 crc kubenswrapper[4915]: I0127 20:11:50.354966 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6985bc6658-t8db5"] Jan 27 20:11:50 crc kubenswrapper[4915]: W0127 20:11:50.362366 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3614bc8f_8773_4f22_a9bd_01c34adbd6cc.slice/crio-ca198c8f103753cb06cb5d05c36ee5f09bc55a10f02e7c551902b81228946173 WatchSource:0}: Error finding container ca198c8f103753cb06cb5d05c36ee5f09bc55a10f02e7c551902b81228946173: Status 404 returned error can't find the container with id ca198c8f103753cb06cb5d05c36ee5f09bc55a10f02e7c551902b81228946173 Jan 27 20:11:50 crc kubenswrapper[4915]: I0127 20:11:50.410055 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"] Jan 27 20:11:50 crc kubenswrapper[4915]: W0127 20:11:50.423136 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18aacb46_b4d8_477d_81df_e0fa4df6c36d.slice/crio-fb75cdc4899bcf1aa15ffe0d1966034511ad9c5e6b76b2502d99ff0684a74572 WatchSource:0}: Error finding container fb75cdc4899bcf1aa15ffe0d1966034511ad9c5e6b76b2502d99ff0684a74572: Status 404 returned error can't find the container with id fb75cdc4899bcf1aa15ffe0d1966034511ad9c5e6b76b2502d99ff0684a74572 Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.067329 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" event={"ID":"80fb5a13-de72-4173-a859-349b05c9d042","Type":"ContainerStarted","Data":"2a88af7b7e404964d11cd16edf56d2513959a39f04c9820d0856bd0cf393d80f"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.067838 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" event={"ID":"80fb5a13-de72-4173-a859-349b05c9d042","Type":"ContainerStarted","Data":"8fc30bdc9e7ce2da80c734192ee8d09dadf78bb6a268ff8f3037165497c76991"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.067849 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" event={"ID":"80fb5a13-de72-4173-a859-349b05c9d042","Type":"ContainerStarted","Data":"ae6a072fa1a317ffb3fcdedea9085c0be5d6a853801cc5795bf31ca0dbf3c01e"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.069964 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575f575d8f-mwqfl" event={"ID":"3b806ab6-0402-4f4a-8163-d2bb7da4beb5","Type":"ContainerStarted","Data":"e0ab40debad464deb1dc9d4f855f93cb92b95715aeac265c8e78257d1c362163"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.069992 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575f575d8f-mwqfl" 
event={"ID":"3b806ab6-0402-4f4a-8163-d2bb7da4beb5","Type":"ContainerStarted","Data":"56327b3a45315ce321fb54d9c073d713eebc938d6bf44b7bd5b1832248960ffb"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.070001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-575f575d8f-mwqfl" event={"ID":"3b806ab6-0402-4f4a-8163-d2bb7da4beb5","Type":"ContainerStarted","Data":"2daec61a30ba5d182c74f915640dee7a576df7bdb53a23834a0951fe9f2bf08f"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.073039 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6985bc6658-t8db5" event={"ID":"3614bc8f-8773-4f22-a9bd-01c34adbd6cc","Type":"ContainerStarted","Data":"9ac8f61e27959e378e5f2fb4c033362d353056abf0a9572583722e6e4a692706"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.073070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6985bc6658-t8db5" event={"ID":"3614bc8f-8773-4f22-a9bd-01c34adbd6cc","Type":"ContainerStarted","Data":"9a5e657f0bb1236b6f14bc1a016071ba4f9af710ba4f1443efc20400d244108e"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.073079 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6985bc6658-t8db5" event={"ID":"3614bc8f-8773-4f22-a9bd-01c34adbd6cc","Type":"ContainerStarted","Data":"ca198c8f103753cb06cb5d05c36ee5f09bc55a10f02e7c551902b81228946173"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.073575 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.073611 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6985bc6658-t8db5" Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.075350 4915 generic.go:334] "Generic (PLEG): container finished" podID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" 
containerID="7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d" exitCode=0 Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.075374 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" event={"ID":"18aacb46-b4d8-477d-81df-e0fa4df6c36d","Type":"ContainerDied","Data":"7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.075390 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" event={"ID":"18aacb46-b4d8-477d-81df-e0fa4df6c36d","Type":"ContainerStarted","Data":"fb75cdc4899bcf1aa15ffe0d1966034511ad9c5e6b76b2502d99ff0684a74572"} Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.089055 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c875f5dbd-4s44w" podStartSLOduration=2.089015519 podStartE2EDuration="2.089015519s" podCreationTimestamp="2026-01-27 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:51.083400021 +0000 UTC m=+5402.441253705" watchObservedRunningTime="2026-01-27 20:11:51.089015519 +0000 UTC m=+5402.446869193" Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.112139 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6985bc6658-t8db5" podStartSLOduration=2.112121398 podStartE2EDuration="2.112121398s" podCreationTimestamp="2026-01-27 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:51.105967057 +0000 UTC m=+5402.463820761" watchObservedRunningTime="2026-01-27 20:11:51.112121398 +0000 UTC m=+5402.469975062" Jan 27 20:11:51 crc kubenswrapper[4915]: I0127 20:11:51.174184 4915 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-worker-575f575d8f-mwqfl" podStartSLOduration=2.1741664370000002 podStartE2EDuration="2.174166437s" podCreationTimestamp="2026-01-27 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:51.1519739 +0000 UTC m=+5402.509827604" watchObservedRunningTime="2026-01-27 20:11:51.174166437 +0000 UTC m=+5402.532020101"
Jan 27 20:11:52 crc kubenswrapper[4915]: I0127 20:11:52.089115 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" event={"ID":"18aacb46-b4d8-477d-81df-e0fa4df6c36d","Type":"ContainerStarted","Data":"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"}
Jan 27 20:11:52 crc kubenswrapper[4915]: I0127 20:11:52.091002 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr"
Jan 27 20:11:52 crc kubenswrapper[4915]: I0127 20:11:52.111230 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" podStartSLOduration=3.111210835 podStartE2EDuration="3.111210835s" podCreationTimestamp="2026-01-27 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:52.110256392 +0000 UTC m=+5403.468110086" watchObservedRunningTime="2026-01-27 20:11:52.111210835 +0000 UTC m=+5403.469064499"
Jan 27 20:11:58 crc kubenswrapper[4915]: I0127 20:11:58.072444 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l25hd"]
Jan 27 20:11:58 crc kubenswrapper[4915]: I0127 20:11:58.082125 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l25hd"]
Jan 27 20:11:59 crc kubenswrapper[4915]: I0127 20:11:59.371408 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a06248c-abed-4c8f-b135-d45412d4427a" path="/var/lib/kubelet/pods/4a06248c-abed-4c8f-b135-d45412d4427a/volumes"
Jan 27 20:11:59 crc kubenswrapper[4915]: I0127 20:11:59.722643 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr"
Jan 27 20:11:59 crc kubenswrapper[4915]: I0127 20:11:59.808478 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"]
Jan 27 20:11:59 crc kubenswrapper[4915]: I0127 20:11:59.808869 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="dnsmasq-dns" containerID="cri-o://e965138ea48fcae0d00fa65a5456d31302ea16cd9310efa9f50b8a276edddd55" gracePeriod=10
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.186458 4915 generic.go:334] "Generic (PLEG): container finished" podID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerID="e965138ea48fcae0d00fa65a5456d31302ea16cd9310efa9f50b8a276edddd55" exitCode=0
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.186562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" event={"ID":"5991eefb-0085-47ea-832a-1c39bbc67d9f","Type":"ContainerDied","Data":"e965138ea48fcae0d00fa65a5456d31302ea16cd9310efa9f50b8a276edddd55"}
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.431534 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8"
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.546986 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c59cb\" (UniqueName: \"kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb\") pod \"5991eefb-0085-47ea-832a-1c39bbc67d9f\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") "
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.547071 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc\") pod \"5991eefb-0085-47ea-832a-1c39bbc67d9f\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") "
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.547106 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb\") pod \"5991eefb-0085-47ea-832a-1c39bbc67d9f\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") "
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.547217 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb\") pod \"5991eefb-0085-47ea-832a-1c39bbc67d9f\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") "
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.547246 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config\") pod \"5991eefb-0085-47ea-832a-1c39bbc67d9f\" (UID: \"5991eefb-0085-47ea-832a-1c39bbc67d9f\") "
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.555165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb" (OuterVolumeSpecName: "kube-api-access-c59cb") pod "5991eefb-0085-47ea-832a-1c39bbc67d9f" (UID: "5991eefb-0085-47ea-832a-1c39bbc67d9f"). InnerVolumeSpecName "kube-api-access-c59cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.603866 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5991eefb-0085-47ea-832a-1c39bbc67d9f" (UID: "5991eefb-0085-47ea-832a-1c39bbc67d9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.607243 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5991eefb-0085-47ea-832a-1c39bbc67d9f" (UID: "5991eefb-0085-47ea-832a-1c39bbc67d9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.616165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config" (OuterVolumeSpecName: "config") pod "5991eefb-0085-47ea-832a-1c39bbc67d9f" (UID: "5991eefb-0085-47ea-832a-1c39bbc67d9f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.616298 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5991eefb-0085-47ea-832a-1c39bbc67d9f" (UID: "5991eefb-0085-47ea-832a-1c39bbc67d9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.649112 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.649152 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.649162 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c59cb\" (UniqueName: \"kubernetes.io/projected/5991eefb-0085-47ea-832a-1c39bbc67d9f-kube-api-access-c59cb\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.649172 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:00 crc kubenswrapper[4915]: I0127 20:12:00.649181 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5991eefb-0085-47ea-832a-1c39bbc67d9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.185408 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6985bc6658-t8db5"
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.202283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8" event={"ID":"5991eefb-0085-47ea-832a-1c39bbc67d9f","Type":"ContainerDied","Data":"e3f4b4493337326b8407b0755d0455a67e0ee9de57f1105d40d16cd7a916a44c"}
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.202411 4915 scope.go:117] "RemoveContainer" containerID="e965138ea48fcae0d00fa65a5456d31302ea16cd9310efa9f50b8a276edddd55"
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.202673 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cfd7647fc-5gmx8"
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.231394 4915 scope.go:117] "RemoveContainer" containerID="e277b0f1fbcdc984b0bdf49b828ecd2dfbba1181620e97a978eeb24af43de543"
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.286223 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"]
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.294095 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cfd7647fc-5gmx8"]
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.343823 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6985bc6658-t8db5"
Jan 27 20:12:01 crc kubenswrapper[4915]: I0127 20:12:01.381760 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" path="/var/lib/kubelet/pods/5991eefb-0085-47ea-832a-1c39bbc67d9f/volumes"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.670456 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"]
Jan 27 20:12:10 crc kubenswrapper[4915]: E0127 20:12:10.671572 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="init"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.671590 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="init"
Jan 27 20:12:10 crc kubenswrapper[4915]: E0127 20:12:10.671622 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="dnsmasq-dns"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.671630 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="dnsmasq-dns"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.671909 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5991eefb-0085-47ea-832a-1c39bbc67d9f" containerName="dnsmasq-dns"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.673514 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.679394 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"]
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.737130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.737229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.737362 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59wb8\" (UniqueName: \"kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.838643 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.838685 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.838778 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59wb8\" (UniqueName: \"kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.839497 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.839743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:10 crc kubenswrapper[4915]: I0127 20:12:10.863506 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59wb8\" (UniqueName: \"kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8\") pod \"redhat-operators-brzfm\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:11 crc kubenswrapper[4915]: I0127 20:12:11.009873 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brzfm"
Jan 27 20:12:11 crc kubenswrapper[4915]: I0127 20:12:11.445244 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"]
Jan 27 20:12:12 crc kubenswrapper[4915]: I0127 20:12:12.300512 4915 generic.go:334] "Generic (PLEG): container finished" podID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerID="7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b" exitCode=0
Jan 27 20:12:12 crc kubenswrapper[4915]: I0127 20:12:12.300591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerDied","Data":"7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b"}
Jan 27 20:12:12 crc kubenswrapper[4915]: I0127 20:12:12.300660 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerStarted","Data":"0d6ff1096dabdcfc82a585007de1ffd13cc103ac4a3c5c35650e1690bc60b21e"}
Jan 27 20:12:12 crc kubenswrapper[4915]: I0127 20:12:12.304237 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.070885 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-smsxm"]
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.072513 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.085870 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-smsxm"]
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.175197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.175338 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgwg\" (UniqueName: \"kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.175820 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a45d-account-create-update-xzgwt"]
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.177231 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.178880 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.187745 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a45d-account-create-update-xzgwt"]
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.277376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.277488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.277609 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgwg\" (UniqueName: \"kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.277645 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2vp\" (UniqueName: \"kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.278430 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.315422 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgwg\" (UniqueName: \"kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg\") pod \"neutron-db-create-smsxm\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") " pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.379474 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2vp\" (UniqueName: \"kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.379642 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.380325 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.393190 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.398941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2vp\" (UniqueName: \"kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp\") pod \"neutron-a45d-account-create-update-xzgwt\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") " pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.505550 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.843592 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-smsxm"]
Jan 27 20:12:13 crc kubenswrapper[4915]: W0127 20:12:13.851609 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc232a473_6549_423f_8c63_79e5507506e5.slice/crio-a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c WatchSource:0}: Error finding container a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c: Status 404 returned error can't find the container with id a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c
Jan 27 20:12:13 crc kubenswrapper[4915]: I0127 20:12:13.972691 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a45d-account-create-update-xzgwt"]
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.321887 4915 generic.go:334] "Generic (PLEG): container finished" podID="36bdf6cd-11c3-4fba-9508-76c9e9a69467" containerID="a1c4853e09911892c1f28e538d09d6be40809596cf5a44be7ccc31186e0bc81a" exitCode=0
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.321946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a45d-account-create-update-xzgwt" event={"ID":"36bdf6cd-11c3-4fba-9508-76c9e9a69467","Type":"ContainerDied","Data":"a1c4853e09911892c1f28e538d09d6be40809596cf5a44be7ccc31186e0bc81a"}
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.322145 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a45d-account-create-update-xzgwt" event={"ID":"36bdf6cd-11c3-4fba-9508-76c9e9a69467","Type":"ContainerStarted","Data":"6d14be9037f5c59297fc27162e93ad27b5ce1c8e778b69d08580eba079016627"}
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.324334 4915 generic.go:334] "Generic (PLEG): container finished" podID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerID="b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521" exitCode=0
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.324397 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerDied","Data":"b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521"}
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.326992 4915 generic.go:334] "Generic (PLEG): container finished" podID="c232a473-6549-423f-8c63-79e5507506e5" containerID="a2387b76251fe60d9695f160467d8820dd9ea061351d508066b5c2396ced25cb" exitCode=0
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.327061 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-smsxm" event={"ID":"c232a473-6549-423f-8c63-79e5507506e5","Type":"ContainerDied","Data":"a2387b76251fe60d9695f160467d8820dd9ea061351d508066b5c2396ced25cb"}
Jan 27 20:12:14 crc kubenswrapper[4915]: I0127 20:12:14.327102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-smsxm" event={"ID":"c232a473-6549-423f-8c63-79e5507506e5","Type":"ContainerStarted","Data":"a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c"}
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.343127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerStarted","Data":"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6"}
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.367287 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brzfm" podStartSLOduration=2.953781146 podStartE2EDuration="5.367259592s" podCreationTimestamp="2026-01-27 20:12:10 +0000 UTC" firstStartedPulling="2026-01-27 20:12:12.303809742 +0000 UTC m=+5423.661663466" lastFinishedPulling="2026-01-27 20:12:14.717288248 +0000 UTC m=+5426.075141912" observedRunningTime="2026-01-27 20:12:15.364418972 +0000 UTC m=+5426.722272666" watchObservedRunningTime="2026-01-27 20:12:15.367259592 +0000 UTC m=+5426.725113266"
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.750419 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.756297 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.824645 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts\") pod \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") "
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.824717 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs2vp\" (UniqueName: \"kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp\") pod \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\" (UID: \"36bdf6cd-11c3-4fba-9508-76c9e9a69467\") "
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.824770 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sgwg\" (UniqueName: \"kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg\") pod \"c232a473-6549-423f-8c63-79e5507506e5\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") "
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.824788 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts\") pod \"c232a473-6549-423f-8c63-79e5507506e5\" (UID: \"c232a473-6549-423f-8c63-79e5507506e5\") "
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.825563 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36bdf6cd-11c3-4fba-9508-76c9e9a69467" (UID: "36bdf6cd-11c3-4fba-9508-76c9e9a69467"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.825580 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c232a473-6549-423f-8c63-79e5507506e5" (UID: "c232a473-6549-423f-8c63-79e5507506e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.830086 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp" (OuterVolumeSpecName: "kube-api-access-hs2vp") pod "36bdf6cd-11c3-4fba-9508-76c9e9a69467" (UID: "36bdf6cd-11c3-4fba-9508-76c9e9a69467"). InnerVolumeSpecName "kube-api-access-hs2vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.830214 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg" (OuterVolumeSpecName: "kube-api-access-2sgwg") pod "c232a473-6549-423f-8c63-79e5507506e5" (UID: "c232a473-6549-423f-8c63-79e5507506e5"). InnerVolumeSpecName "kube-api-access-2sgwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.926578 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36bdf6cd-11c3-4fba-9508-76c9e9a69467-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.926609 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs2vp\" (UniqueName: \"kubernetes.io/projected/36bdf6cd-11c3-4fba-9508-76c9e9a69467-kube-api-access-hs2vp\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.926621 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sgwg\" (UniqueName: \"kubernetes.io/projected/c232a473-6549-423f-8c63-79e5507506e5-kube-api-access-2sgwg\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:15 crc kubenswrapper[4915]: I0127 20:12:15.926630 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c232a473-6549-423f-8c63-79e5507506e5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.354074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a45d-account-create-update-xzgwt" event={"ID":"36bdf6cd-11c3-4fba-9508-76c9e9a69467","Type":"ContainerDied","Data":"6d14be9037f5c59297fc27162e93ad27b5ce1c8e778b69d08580eba079016627"}
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.354115 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d14be9037f5c59297fc27162e93ad27b5ce1c8e778b69d08580eba079016627"
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.354699 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a45d-account-create-update-xzgwt"
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.356145 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-smsxm"
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.356148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-smsxm" event={"ID":"c232a473-6549-423f-8c63-79e5507506e5","Type":"ContainerDied","Data":"a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c"}
Jan 27 20:12:16 crc kubenswrapper[4915]: I0127 20:12:16.356198 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c33539f88e2f44d2b6f755778861e5dbc75cea956bd64b82f3acd136ab938c"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.457050 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nx2wg"]
Jan 27 20:12:18 crc kubenswrapper[4915]: E0127 20:12:18.460244 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bdf6cd-11c3-4fba-9508-76c9e9a69467" containerName="mariadb-account-create-update"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.460272 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bdf6cd-11c3-4fba-9508-76c9e9a69467" containerName="mariadb-account-create-update"
Jan 27 20:12:18 crc kubenswrapper[4915]: E0127 20:12:18.460291 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c232a473-6549-423f-8c63-79e5507506e5" containerName="mariadb-database-create"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.460300 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c232a473-6549-423f-8c63-79e5507506e5" containerName="mariadb-database-create"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.460517 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bdf6cd-11c3-4fba-9508-76c9e9a69467" containerName="mariadb-account-create-update"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.460540 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c232a473-6549-423f-8c63-79e5507506e5" containerName="mariadb-database-create"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.461260 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.466283 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.466506 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.468676 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rgqpn"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.468844 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nx2wg"]
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.570629 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.570785 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.570934 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5b8\" (UniqueName: \"kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.672640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv5b8\" (UniqueName: \"kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.672812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.672850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.677608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.677724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.688611 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv5b8\" (UniqueName: \"kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8\") pod \"neutron-db-sync-nx2wg\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:18 crc kubenswrapper[4915]: I0127 20:12:18.781006 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nx2wg"
Jan 27 20:12:19 crc kubenswrapper[4915]: I0127 20:12:19.354260 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nx2wg"]
Jan 27 20:12:19 crc kubenswrapper[4915]: I0127 20:12:19.395520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nx2wg" event={"ID":"2f5de868-ebe4-4f41-b480-4a71a76b1d0f","Type":"ContainerStarted","Data":"0db8f380348907111d1d0794946bc86d227bde43972ac8d6a677ba0a0baa69d4"}
Jan 27 20:12:20 crc kubenswrapper[4915]: I0127 20:12:20.407908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nx2wg" event={"ID":"2f5de868-ebe4-4f41-b480-4a71a76b1d0f","Type":"ContainerStarted","Data":"6f3665eb13118bd0967bedead4ce7cbedb3aa1bdbb7e0c28c4c83c8e489ca220"}
Jan 27 20:12:20 crc kubenswrapper[4915]: I0127 20:12:20.431244 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nx2wg" podStartSLOduration=2.431217544 podStartE2EDuration="2.431217544s" podCreationTimestamp="2026-01-27 20:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:12:20.421201437 +0000 UTC m=+5431.779055121"
watchObservedRunningTime="2026-01-27 20:12:20.431217544 +0000 UTC m=+5431.789071228" Jan 27 20:12:21 crc kubenswrapper[4915]: I0127 20:12:21.010325 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:21 crc kubenswrapper[4915]: I0127 20:12:21.010396 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:21 crc kubenswrapper[4915]: I0127 20:12:21.051943 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:21 crc kubenswrapper[4915]: I0127 20:12:21.461182 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:21 crc kubenswrapper[4915]: I0127 20:12:21.509666 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"] Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.432458 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brzfm" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="registry-server" containerID="cri-o://1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6" gracePeriod=2 Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.863410 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.963541 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content\") pod \"517eb0df-cc38-4358-874e-6d2ad409d9e0\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.963702 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59wb8\" (UniqueName: \"kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8\") pod \"517eb0df-cc38-4358-874e-6d2ad409d9e0\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.963731 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities\") pod \"517eb0df-cc38-4358-874e-6d2ad409d9e0\" (UID: \"517eb0df-cc38-4358-874e-6d2ad409d9e0\") " Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.965150 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities" (OuterVolumeSpecName: "utilities") pod "517eb0df-cc38-4358-874e-6d2ad409d9e0" (UID: "517eb0df-cc38-4358-874e-6d2ad409d9e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:12:23 crc kubenswrapper[4915]: I0127 20:12:23.969980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8" (OuterVolumeSpecName: "kube-api-access-59wb8") pod "517eb0df-cc38-4358-874e-6d2ad409d9e0" (UID: "517eb0df-cc38-4358-874e-6d2ad409d9e0"). InnerVolumeSpecName "kube-api-access-59wb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.066025 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59wb8\" (UniqueName: \"kubernetes.io/projected/517eb0df-cc38-4358-874e-6d2ad409d9e0-kube-api-access-59wb8\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.066065 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.077727 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "517eb0df-cc38-4358-874e-6d2ad409d9e0" (UID: "517eb0df-cc38-4358-874e-6d2ad409d9e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.167312 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517eb0df-cc38-4358-874e-6d2ad409d9e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.447394 4915 generic.go:334] "Generic (PLEG): container finished" podID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerID="1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6" exitCode=0 Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.447459 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerDied","Data":"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6"} Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.447759 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-brzfm" event={"ID":"517eb0df-cc38-4358-874e-6d2ad409d9e0","Type":"ContainerDied","Data":"0d6ff1096dabdcfc82a585007de1ffd13cc103ac4a3c5c35650e1690bc60b21e"} Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.447805 4915 scope.go:117] "RemoveContainer" containerID="1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.447488 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brzfm" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.488588 4915 scope.go:117] "RemoveContainer" containerID="b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.488738 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"] Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.493185 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brzfm"] Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.515232 4915 scope.go:117] "RemoveContainer" containerID="7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.549811 4915 scope.go:117] "RemoveContainer" containerID="1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6" Jan 27 20:12:24 crc kubenswrapper[4915]: E0127 20:12:24.550280 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6\": container with ID starting with 1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6 not found: ID does not exist" containerID="1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.550318 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6"} err="failed to get container status \"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6\": rpc error: code = NotFound desc = could not find container \"1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6\": container with ID starting with 1b224d4fe5b4800f5c2a5523bab8d04823eaadfe13d20e0bf0d4e9cc3c2804e6 not found: ID does not exist" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.550343 4915 scope.go:117] "RemoveContainer" containerID="b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521" Jan 27 20:12:24 crc kubenswrapper[4915]: E0127 20:12:24.550870 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521\": container with ID starting with b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521 not found: ID does not exist" containerID="b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.550897 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521"} err="failed to get container status \"b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521\": rpc error: code = NotFound desc = could not find container \"b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521\": container with ID starting with b6c3f631e5901526f11dbc5ebc7b259ed3802918a58aea5dd6339b36166a7521 not found: ID does not exist" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.550921 4915 scope.go:117] "RemoveContainer" containerID="7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b" Jan 27 20:12:24 crc kubenswrapper[4915]: E0127 
20:12:24.551220 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b\": container with ID starting with 7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b not found: ID does not exist" containerID="7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b" Jan 27 20:12:24 crc kubenswrapper[4915]: I0127 20:12:24.551258 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b"} err="failed to get container status \"7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b\": rpc error: code = NotFound desc = could not find container \"7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b\": container with ID starting with 7d37a9d020b4152eee1c148ecc7e2e1959c6223c4ceae2ca2d28bd5588736a0b not found: ID does not exist" Jan 27 20:12:25 crc kubenswrapper[4915]: I0127 20:12:25.370127 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" path="/var/lib/kubelet/pods/517eb0df-cc38-4358-874e-6d2ad409d9e0/volumes" Jan 27 20:12:25 crc kubenswrapper[4915]: I0127 20:12:25.457940 4915 generic.go:334] "Generic (PLEG): container finished" podID="2f5de868-ebe4-4f41-b480-4a71a76b1d0f" containerID="6f3665eb13118bd0967bedead4ce7cbedb3aa1bdbb7e0c28c4c83c8e489ca220" exitCode=0 Jan 27 20:12:25 crc kubenswrapper[4915]: I0127 20:12:25.458026 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nx2wg" event={"ID":"2f5de868-ebe4-4f41-b480-4a71a76b1d0f","Type":"ContainerDied","Data":"6f3665eb13118bd0967bedead4ce7cbedb3aa1bdbb7e0c28c4c83c8e489ca220"} Jan 27 20:12:26 crc kubenswrapper[4915]: I0127 20:12:26.851701 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nx2wg" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.023526 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle\") pod \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.023755 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config\") pod \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.023939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv5b8\" (UniqueName: \"kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8\") pod \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\" (UID: \"2f5de868-ebe4-4f41-b480-4a71a76b1d0f\") " Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.039381 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8" (OuterVolumeSpecName: "kube-api-access-zv5b8") pod "2f5de868-ebe4-4f41-b480-4a71a76b1d0f" (UID: "2f5de868-ebe4-4f41-b480-4a71a76b1d0f"). InnerVolumeSpecName "kube-api-access-zv5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.064513 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f5de868-ebe4-4f41-b480-4a71a76b1d0f" (UID: "2f5de868-ebe4-4f41-b480-4a71a76b1d0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.075238 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config" (OuterVolumeSpecName: "config") pod "2f5de868-ebe4-4f41-b480-4a71a76b1d0f" (UID: "2f5de868-ebe4-4f41-b480-4a71a76b1d0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.127348 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.127392 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.127405 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv5b8\" (UniqueName: \"kubernetes.io/projected/2f5de868-ebe4-4f41-b480-4a71a76b1d0f-kube-api-access-zv5b8\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.478190 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nx2wg" event={"ID":"2f5de868-ebe4-4f41-b480-4a71a76b1d0f","Type":"ContainerDied","Data":"0db8f380348907111d1d0794946bc86d227bde43972ac8d6a677ba0a0baa69d4"} Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.478261 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db8f380348907111d1d0794946bc86d227bde43972ac8d6a677ba0a0baa69d4" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.478208 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nx2wg" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.724968 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"] Jan 27 20:12:27 crc kubenswrapper[4915]: E0127 20:12:27.725310 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="registry-server" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725328 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="registry-server" Jan 27 20:12:27 crc kubenswrapper[4915]: E0127 20:12:27.725344 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="extract-content" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725350 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="extract-content" Jan 27 20:12:27 crc kubenswrapper[4915]: E0127 20:12:27.725370 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="extract-utilities" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725377 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="extract-utilities" Jan 27 20:12:27 crc kubenswrapper[4915]: E0127 20:12:27.725390 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5de868-ebe4-4f41-b480-4a71a76b1d0f" containerName="neutron-db-sync" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725396 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5de868-ebe4-4f41-b480-4a71a76b1d0f" containerName="neutron-db-sync" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725541 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5de868-ebe4-4f41-b480-4a71a76b1d0f" 
containerName="neutron-db-sync" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.725552 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="517eb0df-cc38-4358-874e-6d2ad409d9e0" containerName="registry-server" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.726384 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.741347 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"] Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.809048 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f78f7959c-b4nv2"] Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.810727 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f78f7959c-b4nv2" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.814167 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.814402 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rgqpn" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.814552 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.817073 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f78f7959c-b4nv2"] Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.839259 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldt9z\" (UniqueName: \"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " 
pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.839387 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.839453 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.839516 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.839622 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " 
pod="openstack/neutron-5f78f7959c-b4nv2" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941538 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnmb\" (UniqueName: \"kubernetes.io/projected/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-kube-api-access-glnmb\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldt9z\" (UniqueName: \"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941626 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-httpd-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941710 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " 
pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941741 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941784 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.941858 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-combined-ca-bundle\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.943124 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.943753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:12:27 crc 
kubenswrapper[4915]: I0127 20:12:27.944421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.945141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:27 crc kubenswrapper[4915]: I0127 20:12:27.960616 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldt9z\" (UniqueName: \"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z\") pod \"dnsmasq-dns-6dc9c57db9-qwm47\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.043753 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-combined-ca-bundle\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.043818 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.043865 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnmb\" (UniqueName: \"kubernetes.io/projected/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-kube-api-access-glnmb\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.043906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-httpd-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.049300 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.050200 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-combined-ca-bundle\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.050582 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.051355 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-httpd-config\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.061175 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnmb\" (UniqueName: \"kubernetes.io/projected/7f48a50f-8b17-4d5d-9c46-64f7ad566e13-kube-api-access-glnmb\") pod \"neutron-5f78f7959c-b4nv2\" (UID: \"7f48a50f-8b17-4d5d-9c46-64f7ad566e13\") " pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.139804 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.328961 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"]
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.487640 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" event={"ID":"414fa0ae-660b-4e2e-acb8-6e039c2cd11a","Type":"ContainerStarted","Data":"2aa78a261d1e496cc5a87f5dd1b55d5f5e83ddc717709919eba4119d976f260a"}
Jan 27 20:12:28 crc kubenswrapper[4915]: W0127 20:12:28.702725 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f48a50f_8b17_4d5d_9c46_64f7ad566e13.slice/crio-cd8fe5d7c46d21f682aabe7da2f74cddf7f82846552e73c668b3dece11a1ab14 WatchSource:0}: Error finding container cd8fe5d7c46d21f682aabe7da2f74cddf7f82846552e73c668b3dece11a1ab14: Status 404 returned error can't find the container with id cd8fe5d7c46d21f682aabe7da2f74cddf7f82846552e73c668b3dece11a1ab14
Jan 27 20:12:28 crc kubenswrapper[4915]: I0127 20:12:28.706124 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f78f7959c-b4nv2"]
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.495360 4915 generic.go:334] "Generic (PLEG): container finished" podID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerID="582f1257f3e5dc138b9a7df6b2186651ffc66b3de237aabdb4d328624776f332" exitCode=0
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.495767 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" event={"ID":"414fa0ae-660b-4e2e-acb8-6e039c2cd11a","Type":"ContainerDied","Data":"582f1257f3e5dc138b9a7df6b2186651ffc66b3de237aabdb4d328624776f332"}
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.502141 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78f7959c-b4nv2" event={"ID":"7f48a50f-8b17-4d5d-9c46-64f7ad566e13","Type":"ContainerStarted","Data":"e5cd3184d72b587e9ea6e86db90dcabaad519ef2d665e7b5fcb7806b2aec13e5"}
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.502418 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78f7959c-b4nv2" event={"ID":"7f48a50f-8b17-4d5d-9c46-64f7ad566e13","Type":"ContainerStarted","Data":"1c2e453b7dcc440c2cca79cfe312b6b19b4c66d8425c26c835a9be35a661c62a"}
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.502441 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.502453 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f78f7959c-b4nv2" event={"ID":"7f48a50f-8b17-4d5d-9c46-64f7ad566e13","Type":"ContainerStarted","Data":"cd8fe5d7c46d21f682aabe7da2f74cddf7f82846552e73c668b3dece11a1ab14"}
Jan 27 20:12:29 crc kubenswrapper[4915]: I0127 20:12:29.543189 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f78f7959c-b4nv2" podStartSLOduration=2.543168944 podStartE2EDuration="2.543168944s" podCreationTimestamp="2026-01-27 20:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:12:29.539951675 +0000 UTC m=+5440.897805339" watchObservedRunningTime="2026-01-27 20:12:29.543168944 +0000 UTC m=+5440.901022598"
Jan 27 20:12:30 crc kubenswrapper[4915]: I0127 20:12:30.514200 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" event={"ID":"414fa0ae-660b-4e2e-acb8-6e039c2cd11a","Type":"ContainerStarted","Data":"51d69d60c41b6681f468aeac488c104b40942aa3004a7f012561f3afdaaef61b"}
Jan 27 20:12:30 crc kubenswrapper[4915]: I0127 20:12:30.514527 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:30 crc kubenswrapper[4915]: I0127 20:12:30.556238 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" podStartSLOduration=3.556210515 podStartE2EDuration="3.556210515s" podCreationTimestamp="2026-01-27 20:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:12:30.540362004 +0000 UTC m=+5441.898215688" watchObservedRunningTime="2026-01-27 20:12:30.556210515 +0000 UTC m=+5441.914064219"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.052016 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.115692 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"]
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.117667 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="dnsmasq-dns" containerID="cri-o://cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25" gracePeriod=10
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.586773 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.590896 4915 generic.go:334] "Generic (PLEG): container finished" podID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerID="cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25" exitCode=0
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.590938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" event={"ID":"18aacb46-b4d8-477d-81df-e0fa4df6c36d","Type":"ContainerDied","Data":"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"}
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.590958 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.590976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ffc8877c-t8ggr" event={"ID":"18aacb46-b4d8-477d-81df-e0fa4df6c36d","Type":"ContainerDied","Data":"fb75cdc4899bcf1aa15ffe0d1966034511ad9c5e6b76b2502d99ff0684a74572"}
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.590997 4915 scope.go:117] "RemoveContainer" containerID="cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.622090 4915 scope.go:117] "RemoveContainer" containerID="7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.653958 4915 scope.go:117] "RemoveContainer" containerID="cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"
Jan 27 20:12:38 crc kubenswrapper[4915]: E0127 20:12:38.654507 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25\": container with ID starting with cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25 not found: ID does not exist" containerID="cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.654538 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25"} err="failed to get container status \"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25\": rpc error: code = NotFound desc = could not find container \"cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25\": container with ID starting with cbd9f2b693e77c7e74966d225ba950921502e99eb368055af8674b745761de25 not found: ID does not exist"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.654564 4915 scope.go:117] "RemoveContainer" containerID="7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d"
Jan 27 20:12:38 crc kubenswrapper[4915]: E0127 20:12:38.655158 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d\": container with ID starting with 7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d not found: ID does not exist" containerID="7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.655211 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d"} err="failed to get container status \"7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d\": rpc error: code = NotFound desc = could not find container \"7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d\": container with ID starting with 7261bd60eba84e5e9fc15fc363fa8bbc85c330eb87f52ec35dd941dbb8652b0d not found: ID does not exist"
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.743676 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb\") pod \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") "
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.743766 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc\") pod \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") "
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.743907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config\") pod \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") "
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.743973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptjnd\" (UniqueName: \"kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd\") pod \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") "
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.744043 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb\") pod \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\" (UID: \"18aacb46-b4d8-477d-81df-e0fa4df6c36d\") "
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.750722 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd" (OuterVolumeSpecName: "kube-api-access-ptjnd") pod "18aacb46-b4d8-477d-81df-e0fa4df6c36d" (UID: "18aacb46-b4d8-477d-81df-e0fa4df6c36d"). InnerVolumeSpecName "kube-api-access-ptjnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.784092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18aacb46-b4d8-477d-81df-e0fa4df6c36d" (UID: "18aacb46-b4d8-477d-81df-e0fa4df6c36d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.790286 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18aacb46-b4d8-477d-81df-e0fa4df6c36d" (UID: "18aacb46-b4d8-477d-81df-e0fa4df6c36d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.790399 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config" (OuterVolumeSpecName: "config") pod "18aacb46-b4d8-477d-81df-e0fa4df6c36d" (UID: "18aacb46-b4d8-477d-81df-e0fa4df6c36d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.794988 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18aacb46-b4d8-477d-81df-e0fa4df6c36d" (UID: "18aacb46-b4d8-477d-81df-e0fa4df6c36d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.849524 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.849577 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptjnd\" (UniqueName: \"kubernetes.io/projected/18aacb46-b4d8-477d-81df-e0fa4df6c36d-kube-api-access-ptjnd\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.849591 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.849606 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.849621 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18aacb46-b4d8-477d-81df-e0fa4df6c36d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.929815 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"]
Jan 27 20:12:38 crc kubenswrapper[4915]: I0127 20:12:38.937969 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79ffc8877c-t8ggr"]
Jan 27 20:12:39 crc kubenswrapper[4915]: I0127 20:12:39.366874 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" path="/var/lib/kubelet/pods/18aacb46-b4d8-477d-81df-e0fa4df6c36d/volumes"
Jan 27 20:12:56 crc kubenswrapper[4915]: I0127 20:12:56.158546 4915 scope.go:117] "RemoveContainer" containerID="dcf2d93585da668ed3eba54949c404bbb14978f679eb0561e387088e0bb97bb3"
Jan 27 20:12:58 crc kubenswrapper[4915]: I0127 20:12:58.153489 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f78f7959c-b4nv2"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.786301 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kmmcp"]
Jan 27 20:13:04 crc kubenswrapper[4915]: E0127 20:13:04.787290 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="dnsmasq-dns"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.787307 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="dnsmasq-dns"
Jan 27 20:13:04 crc kubenswrapper[4915]: E0127 20:13:04.787332 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="init"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.787340 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="init"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.787546 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="18aacb46-b4d8-477d-81df-e0fa4df6c36d" containerName="dnsmasq-dns"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.788341 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.796668 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6079-account-create-update-5cdgf"]
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.797691 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.799849 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.807962 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kmmcp"]
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.816428 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6079-account-create-update-5cdgf"]
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.880422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.880772 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.880893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9dl\" (UniqueName: \"kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.880981 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727pb\" (UniqueName: \"kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.982579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.982671 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.982703 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9dl\" (UniqueName: \"kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.982728 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727pb\" (UniqueName: \"kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.983556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:04 crc kubenswrapper[4915]: I0127 20:13:04.983706 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.013703 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727pb\" (UniqueName: \"kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb\") pod \"glance-db-create-kmmcp\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") " pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.018308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9dl\" (UniqueName: \"kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl\") pod \"glance-6079-account-create-update-5cdgf\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") " pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.123384 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.129567 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.560114 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6079-account-create-update-5cdgf"]
Jan 27 20:13:05 crc kubenswrapper[4915]: W0127 20:13:05.625768 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9e8b539_5c97_4710_94a4_8bd3395ea16a.slice/crio-fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d WatchSource:0}: Error finding container fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d: Status 404 returned error can't find the container with id fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.626231 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kmmcp"]
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.882730 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6079-account-create-update-5cdgf" event={"ID":"848fb3e1-4209-4d4a-8e31-b66952b1a767","Type":"ContainerStarted","Data":"42a8a3c304454fc5ad99caa855897210179275d3b63b7b50215b5d1a5df0a868"}
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.882772 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6079-account-create-update-5cdgf" event={"ID":"848fb3e1-4209-4d4a-8e31-b66952b1a767","Type":"ContainerStarted","Data":"99a3332fabcaaeb600ac1d835a8be86b2eeeff13e8aa109824f55657f8fc4d59"}
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.884447 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kmmcp" event={"ID":"c9e8b539-5c97-4710-94a4-8bd3395ea16a","Type":"ContainerStarted","Data":"1834a31a0cf740981d5d79adccad04217d9d433a78c0044513e76e4ed2fc1868"}
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.884472 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kmmcp" event={"ID":"c9e8b539-5c97-4710-94a4-8bd3395ea16a","Type":"ContainerStarted","Data":"fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d"}
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.901839 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-6079-account-create-update-5cdgf" podStartSLOduration=1.9018203759999999 podStartE2EDuration="1.901820376s" podCreationTimestamp="2026-01-27 20:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:05.896846004 +0000 UTC m=+5477.254699668" watchObservedRunningTime="2026-01-27 20:13:05.901820376 +0000 UTC m=+5477.259674040"
Jan 27 20:13:05 crc kubenswrapper[4915]: I0127 20:13:05.913415 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-kmmcp" podStartSLOduration=1.913394931 podStartE2EDuration="1.913394931s" podCreationTimestamp="2026-01-27 20:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:05.912073779 +0000 UTC m=+5477.269927463" watchObservedRunningTime="2026-01-27 20:13:05.913394931 +0000 UTC m=+5477.271248585"
Jan 27 20:13:06 crc kubenswrapper[4915]: I0127 20:13:06.894112 4915 generic.go:334] "Generic (PLEG): container finished" podID="c9e8b539-5c97-4710-94a4-8bd3395ea16a" containerID="1834a31a0cf740981d5d79adccad04217d9d433a78c0044513e76e4ed2fc1868" exitCode=0
Jan 27 20:13:06 crc kubenswrapper[4915]: I0127 20:13:06.894169 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kmmcp" event={"ID":"c9e8b539-5c97-4710-94a4-8bd3395ea16a","Type":"ContainerDied","Data":"1834a31a0cf740981d5d79adccad04217d9d433a78c0044513e76e4ed2fc1868"}
Jan 27 20:13:06 crc kubenswrapper[4915]: I0127 20:13:06.895811 4915 generic.go:334] "Generic (PLEG): container finished" podID="848fb3e1-4209-4d4a-8e31-b66952b1a767" containerID="42a8a3c304454fc5ad99caa855897210179275d3b63b7b50215b5d1a5df0a868" exitCode=0
Jan 27 20:13:06 crc kubenswrapper[4915]: I0127 20:13:06.895835 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6079-account-create-update-5cdgf" event={"ID":"848fb3e1-4209-4d4a-8e31-b66952b1a767","Type":"ContainerDied","Data":"42a8a3c304454fc5ad99caa855897210179275d3b63b7b50215b5d1a5df0a868"}
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.312719 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.320703 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.444340 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts\") pod \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") "
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.444583 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-727pb\" (UniqueName: \"kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb\") pod \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\" (UID: \"c9e8b539-5c97-4710-94a4-8bd3395ea16a\") "
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.444615 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h9dl\" (UniqueName: \"kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl\") pod \"848fb3e1-4209-4d4a-8e31-b66952b1a767\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") "
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.444707 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts\") pod \"848fb3e1-4209-4d4a-8e31-b66952b1a767\" (UID: \"848fb3e1-4209-4d4a-8e31-b66952b1a767\") "
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.445238 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9e8b539-5c97-4710-94a4-8bd3395ea16a" (UID: "c9e8b539-5c97-4710-94a4-8bd3395ea16a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.445650 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "848fb3e1-4209-4d4a-8e31-b66952b1a767" (UID: "848fb3e1-4209-4d4a-8e31-b66952b1a767"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.446316 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/848fb3e1-4209-4d4a-8e31-b66952b1a767-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.446341 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b539-5c97-4710-94a4-8bd3395ea16a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.449725 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl" (OuterVolumeSpecName: "kube-api-access-6h9dl") pod "848fb3e1-4209-4d4a-8e31-b66952b1a767" (UID: "848fb3e1-4209-4d4a-8e31-b66952b1a767"). InnerVolumeSpecName "kube-api-access-6h9dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.451771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb" (OuterVolumeSpecName: "kube-api-access-727pb") pod "c9e8b539-5c97-4710-94a4-8bd3395ea16a" (UID: "c9e8b539-5c97-4710-94a4-8bd3395ea16a"). InnerVolumeSpecName "kube-api-access-727pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.548015 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727pb\" (UniqueName: \"kubernetes.io/projected/c9e8b539-5c97-4710-94a4-8bd3395ea16a-kube-api-access-727pb\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.548061 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h9dl\" (UniqueName: \"kubernetes.io/projected/848fb3e1-4209-4d4a-8e31-b66952b1a767-kube-api-access-6h9dl\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.917964 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6079-account-create-update-5cdgf" event={"ID":"848fb3e1-4209-4d4a-8e31-b66952b1a767","Type":"ContainerDied","Data":"99a3332fabcaaeb600ac1d835a8be86b2eeeff13e8aa109824f55657f8fc4d59"}
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.918024 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a3332fabcaaeb600ac1d835a8be86b2eeeff13e8aa109824f55657f8fc4d59"
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.917980 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6079-account-create-update-5cdgf"
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.920822 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kmmcp" event={"ID":"c9e8b539-5c97-4710-94a4-8bd3395ea16a","Type":"ContainerDied","Data":"fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d"}
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.920870 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa4216763744a86370f7ec8c70c9a64af8fdfc4c02f35f0c010867539e675c9d"
Jan 27 20:13:08 crc kubenswrapper[4915]: I0127 20:13:08.920922 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kmmcp"
Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.937190 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nqxnm"]
Jan 27 20:13:09 crc kubenswrapper[4915]: E0127 20:13:09.937805 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e8b539-5c97-4710-94a4-8bd3395ea16a" containerName="mariadb-database-create"
Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.937817 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e8b539-5c97-4710-94a4-8bd3395ea16a" containerName="mariadb-database-create"
Jan 27 20:13:09 crc kubenswrapper[4915]: E0127 20:13:09.937830 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848fb3e1-4209-4d4a-8e31-b66952b1a767" containerName="mariadb-account-create-update"
Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.937837 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="848fb3e1-4209-4d4a-8e31-b66952b1a767" containerName="mariadb-account-create-update"
Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.938002 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e8b539-5c97-4710-94a4-8bd3395ea16a" containerName="mariadb-database-create"
Jan 27
20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.938017 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="848fb3e1-4209-4d4a-8e31-b66952b1a767" containerName="mariadb-account-create-update" Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.938514 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.940559 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.942168 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9wnd" Jan 27 20:13:09 crc kubenswrapper[4915]: I0127 20:13:09.952361 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nqxnm"] Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.076715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.076800 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbvn\" (UniqueName: \"kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.076889 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data\") pod \"glance-db-sync-nqxnm\" (UID: 
\"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.077095 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.178601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.178699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbvn\" (UniqueName: \"kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.178738 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.178826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 
20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.182971 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.186487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.191163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.204512 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbvn\" (UniqueName: \"kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn\") pod \"glance-db-sync-nqxnm\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.257510 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.760160 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nqxnm"] Jan 27 20:13:10 crc kubenswrapper[4915]: I0127 20:13:10.948092 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nqxnm" event={"ID":"00c9fb59-a8c5-4411-975d-d2751dca2344","Type":"ContainerStarted","Data":"6a7560c60fcf2022373a0746bd780873598e6e0a15272a6a427f2724daa77b36"} Jan 27 20:13:11 crc kubenswrapper[4915]: I0127 20:13:11.955956 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nqxnm" event={"ID":"00c9fb59-a8c5-4411-975d-d2751dca2344","Type":"ContainerStarted","Data":"418dd9be86baa5d7c19a7cbe1f5cf3b8c10eeb767727fe741f8d83aa05fc19f4"} Jan 27 20:13:11 crc kubenswrapper[4915]: I0127 20:13:11.976195 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nqxnm" podStartSLOduration=2.976174333 podStartE2EDuration="2.976174333s" podCreationTimestamp="2026-01-27 20:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:11.970676758 +0000 UTC m=+5483.328530422" watchObservedRunningTime="2026-01-27 20:13:11.976174333 +0000 UTC m=+5483.334027987" Jan 27 20:13:14 crc kubenswrapper[4915]: I0127 20:13:14.981887 4915 generic.go:334] "Generic (PLEG): container finished" podID="00c9fb59-a8c5-4411-975d-d2751dca2344" containerID="418dd9be86baa5d7c19a7cbe1f5cf3b8c10eeb767727fe741f8d83aa05fc19f4" exitCode=0 Jan 27 20:13:14 crc kubenswrapper[4915]: I0127 20:13:14.981970 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nqxnm" event={"ID":"00c9fb59-a8c5-4411-975d-d2751dca2344","Type":"ContainerDied","Data":"418dd9be86baa5d7c19a7cbe1f5cf3b8c10eeb767727fe741f8d83aa05fc19f4"} Jan 27 20:13:16 crc kubenswrapper[4915]: 
I0127 20:13:16.380407 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.501786 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle\") pod \"00c9fb59-a8c5-4411-975d-d2751dca2344\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.501904 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbvn\" (UniqueName: \"kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn\") pod \"00c9fb59-a8c5-4411-975d-d2751dca2344\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.501956 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data\") pod \"00c9fb59-a8c5-4411-975d-d2751dca2344\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.502009 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data\") pod \"00c9fb59-a8c5-4411-975d-d2751dca2344\" (UID: \"00c9fb59-a8c5-4411-975d-d2751dca2344\") " Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.506703 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "00c9fb59-a8c5-4411-975d-d2751dca2344" (UID: "00c9fb59-a8c5-4411-975d-d2751dca2344"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.507077 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn" (OuterVolumeSpecName: "kube-api-access-qqbvn") pod "00c9fb59-a8c5-4411-975d-d2751dca2344" (UID: "00c9fb59-a8c5-4411-975d-d2751dca2344"). InnerVolumeSpecName "kube-api-access-qqbvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.546663 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c9fb59-a8c5-4411-975d-d2751dca2344" (UID: "00c9fb59-a8c5-4411-975d-d2751dca2344"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.567002 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data" (OuterVolumeSpecName: "config-data") pod "00c9fb59-a8c5-4411-975d-d2751dca2344" (UID: "00c9fb59-a8c5-4411-975d-d2751dca2344"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.604178 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.604207 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbvn\" (UniqueName: \"kubernetes.io/projected/00c9fb59-a8c5-4411-975d-d2751dca2344-kube-api-access-qqbvn\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.604220 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:16 crc kubenswrapper[4915]: I0127 20:13:16.604231 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c9fb59-a8c5-4411-975d-d2751dca2344-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.022216 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nqxnm" event={"ID":"00c9fb59-a8c5-4411-975d-d2751dca2344","Type":"ContainerDied","Data":"6a7560c60fcf2022373a0746bd780873598e6e0a15272a6a427f2724daa77b36"} Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.022280 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7560c60fcf2022373a0746bd780873598e6e0a15272a6a427f2724daa77b36" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.022339 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nqxnm" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.291368 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:17 crc kubenswrapper[4915]: E0127 20:13:17.291753 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c9fb59-a8c5-4411-975d-d2751dca2344" containerName="glance-db-sync" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.291776 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c9fb59-a8c5-4411-975d-d2751dca2344" containerName="glance-db-sync" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.292016 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c9fb59-a8c5-4411-975d-d2751dca2344" containerName="glance-db-sync" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.292833 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.294755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.295662 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.296301 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.302538 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9wnd" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.323521 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.409092 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"] Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.410478 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.420068 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"] Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.424891 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.424931 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.424969 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.425021 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc 
kubenswrapper[4915]: I0127 20:13:17.425066 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.425144 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4brn\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.425227 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531491 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531561 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " 
pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531595 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxd2\" (UniqueName: \"kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531620 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531649 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4brn\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531675 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531726 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " 
pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531869 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531920 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.531951 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 
20:13:17.532412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.534218 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.537917 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.541314 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.542529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.542765 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.544555 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 20:13:17 crc 
kubenswrapper[4915]: I0127 20:13:17.547332 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.548625 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.553609 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.566090 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4brn\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn\") pod \"glance-default-external-api-0\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.620253 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633093 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633182 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633274 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxd2\" (UniqueName: \"kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633326 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633372 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633400 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633418 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633446 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhbv\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.633506 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.634136 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.634630 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.634935 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.635084 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.635362 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.649534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxd2\" (UniqueName: \"kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2\") pod \"dnsmasq-dns-78cb9bb6f7-mkftp\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.728889 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.735974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736330 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhbv\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736482 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736588 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736642 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.736662 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.738920 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.739399 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.744188 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.744276 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.744410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.747898 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.753833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhbv\"
(UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv\") pod \"glance-default-internal-api-0\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:17 crc kubenswrapper[4915]: I0127 20:13:17.918948 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 20:13:18 crc kubenswrapper[4915]: I0127 20:13:18.139321 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 20:13:18 crc kubenswrapper[4915]: I0127 20:13:18.189228 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"]
Jan 27 20:13:18 crc kubenswrapper[4915]: I0127 20:13:18.441901 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 20:13:18 crc kubenswrapper[4915]: I0127 20:13:18.462619 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 20:13:18 crc kubenswrapper[4915]: W0127 20:13:18.471333 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaab7866d_0082_4581_9e7c_146b3a1169d7.slice/crio-557ad9125965b4189830cc086fd727154415eb096bb44e78cbf58487a1d048e3 WatchSource:0}: Error finding container 557ad9125965b4189830cc086fd727154415eb096bb44e78cbf58487a1d048e3: Status 404 returned error can't find the container with id 557ad9125965b4189830cc086fd727154415eb096bb44e78cbf58487a1d048e3
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.040328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerStarted","Data":"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.040784 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerStarted","Data":"557ad9125965b4189830cc086fd727154415eb096bb44e78cbf58487a1d048e3"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.042051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerStarted","Data":"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.042095 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerStarted","Data":"8ce522b037721fc980caa49719099df8a39d3f23a239c01474548e989f50cb3f"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.043497 4915 generic.go:334] "Generic (PLEG): container finished" podID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerID="2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc" exitCode=0
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.043533 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" event={"ID":"dd6c739c-9243-41cc-a4a9-fa33bd588820","Type":"ContainerDied","Data":"2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.043552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" event={"ID":"dd6c739c-9243-41cc-a4a9-fa33bd588820","Type":"ContainerStarted","Data":"a6fc022437fc144c3fade216574cf47140e1119069cef61bbe2052210284db91"}
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.970543 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"]
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.973232 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.979529 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"]
Jan 27 20:13:19 crc kubenswrapper[4915]: I0127 20:13:19.993672 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.059888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" event={"ID":"dd6c739c-9243-41cc-a4a9-fa33bd588820","Type":"ContainerStarted","Data":"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36"}
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.060516 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.063006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerStarted","Data":"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79"}
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.065769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerStarted","Data":"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"}
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.066233 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-httpd" containerID="cri-o://70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649" gracePeriod=30
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.066256 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-log" containerID="cri-o://1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef" gracePeriod=30
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.082363 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.082467 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrtl\" (UniqueName: \"kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.082535 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.089513 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" podStartSLOduration=3.089486897 podStartE2EDuration="3.089486897s" podCreationTimestamp="2026-01-27 20:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:20.076559108 +0000 UTC m=+5491.434412782" watchObservedRunningTime="2026-01-27 20:13:20.089486897 +0000 UTC m=+5491.447340561"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.108183 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.108156047 podStartE2EDuration="3.108156047s" podCreationTimestamp="2026-01-27 20:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:20.103461361 +0000 UTC m=+5491.461315035" watchObservedRunningTime="2026-01-27 20:13:20.108156047 +0000 UTC m=+5491.466009721"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.131022 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.13100427 podStartE2EDuration="3.13100427s" podCreationTimestamp="2026-01-27 20:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:20.12290427 +0000 UTC m=+5491.480757944" watchObservedRunningTime="2026-01-27 20:13:20.13100427 +0000 UTC m=+5491.488857934"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.184431 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.184553 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrtl\" (UniqueName: \"kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl\") pod \"certified-operators-mfkk4\" (UID:
\"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.184611 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.185322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.186521 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.215591 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrtl\" (UniqueName: \"kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl\") pod \"certified-operators-mfkk4\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.301700 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfkk4"
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.807271 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"]
Jan 27 20:13:20 crc kubenswrapper[4915]: I0127 20:13:20.994434 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.073947 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerStarted","Data":"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1"}
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.073984 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerStarted","Data":"29c8cc1779a2d567d99d8a69526b4fa794130f3cb2e195e80bdbe8495d38ca1f"}
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.077046 4915 generic.go:334] "Generic (PLEG): container finished" podID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerID="70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649" exitCode=0
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.077092 4915 generic.go:334] "Generic (PLEG): container finished" podID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerID="1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef" exitCode=143
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.077337 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-log" containerID="cri-o://9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" gracePeriod=30
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.077668 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.078137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerDied","Data":"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"}
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.078184 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerDied","Data":"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"}
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.078199 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7989c88e-cdd8-4abb-ae52-6431d4770d08","Type":"ContainerDied","Data":"8ce522b037721fc980caa49719099df8a39d3f23a239c01474548e989f50cb3f"}
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.078217 4915 scope.go:117] "RemoveContainer" containerID="70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.079030 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-httpd" containerID="cri-o://8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" gracePeriod=30
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.103977 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104043 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104089 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104105 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104202 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4brn\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104263 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.104318 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data\") pod \"7989c88e-cdd8-4abb-ae52-6431d4770d08\" (UID: \"7989c88e-cdd8-4abb-ae52-6431d4770d08\") "
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.107093 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.107335 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs" (OuterVolumeSpecName: "logs") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.111248 4915 scope.go:117] "RemoveContainer" containerID="1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.114295 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts" (OuterVolumeSpecName: "scripts") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.114911 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph" (OuterVolumeSpecName: "ceph") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "ceph".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.114949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn" (OuterVolumeSpecName: "kube-api-access-z4brn") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "kube-api-access-z4brn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.136493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.157047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data" (OuterVolumeSpecName: "config-data") pod "7989c88e-cdd8-4abb-ae52-6431d4770d08" (UID: "7989c88e-cdd8-4abb-ae52-6431d4770d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.206556 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4brn\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-kube-api-access-z4brn\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207216 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207230 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207272 4915 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7989c88e-cdd8-4abb-ae52-6431d4770d08-ceph\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207292 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207305 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7989c88e-cdd8-4abb-ae52-6431d4770d08-logs\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.207316 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7989c88e-cdd8-4abb-ae52-6431d4770d08-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.224679 4915 scope.go:117] "RemoveContainer" containerID="70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"
Jan 27 20:13:21 crc kubenswrapper[4915]: E0127 20:13:21.225147 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649\": container with ID starting with 70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649 not found: ID does not exist" containerID="70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.225183 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"} err="failed to get container status \"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649\": rpc error: code = NotFound desc = could not find container \"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649\": container with ID starting with 70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649 not found: ID does not exist"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.225210 4915 scope.go:117] "RemoveContainer" containerID="1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"
Jan 27 20:13:21 crc kubenswrapper[4915]: E0127 20:13:21.225602 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef\": container with ID starting with 1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef not found: ID does not exist" containerID="1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.225623 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"} err="failed to get container status \"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef\": rpc error: code = NotFound desc = could not find container \"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef\": container with ID starting with 1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef not found: ID does not exist"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.225639 4915 scope.go:117] "RemoveContainer" containerID="70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.226715 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649"} err="failed to get container status \"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649\": rpc error: code = NotFound desc = could not find container \"70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649\": container with ID starting with 70980bf20b5dff49dee34cc8b7ef4f0084b23f145de5c9683656d1d2eb040649 not found: ID does not exist"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.226738 4915 scope.go:117] "RemoveContainer" containerID="1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"
Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.227160 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef"} err="failed to get container status \"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef\": rpc error: code = NotFound desc = could not find container \"1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef\": container with ID starting with 1f97ef9714f3a7a5bf1d1940ab68c0ea46762deb68866fe00f6bb586fade05ef not found: ID does not
exist" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.401640 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.421086 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.433761 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:21 crc kubenswrapper[4915]: E0127 20:13:21.434253 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-httpd" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.434276 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-httpd" Jan 27 20:13:21 crc kubenswrapper[4915]: E0127 20:13:21.434299 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-log" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.434306 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-log" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.434533 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-log" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.434565 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" containerName="glance-httpd" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.435891 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.438484 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.443311 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.512053 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.512331 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjnp\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-kube-api-access-hsjnp\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.512451 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-logs\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.512514 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " 
pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.513134 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.513172 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.513608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-ceph\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.614867 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.614907 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 
crc kubenswrapper[4915]: I0127 20:13:21.614947 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-ceph\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.614976 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.615014 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjnp\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-kube-api-access-hsjnp\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.615045 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-logs\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.615064 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.615373 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.615534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7efd00f-5998-493d-a94a-7c319ecc7663-logs\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.622902 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.623372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-ceph\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.624103 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.627543 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f7efd00f-5998-493d-a94a-7c319ecc7663-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.633309 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjnp\" (UniqueName: \"kubernetes.io/projected/f7efd00f-5998-493d-a94a-7c319ecc7663-kube-api-access-hsjnp\") pod \"glance-default-external-api-0\" (UID: \"f7efd00f-5998-493d-a94a-7c319ecc7663\") " pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.652954 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.753686 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818350 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818461 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818489 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhbv\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: 
\"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818544 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818592 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.818690 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data\") pod \"aab7866d-0082-4581-9e7c-146b3a1169d7\" (UID: \"aab7866d-0082-4581-9e7c-146b3a1169d7\") " Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.819176 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.819213 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs" (OuterVolumeSpecName: "logs") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.844024 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph" (OuterVolumeSpecName: "ceph") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.850979 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts" (OuterVolumeSpecName: "scripts") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.851829 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv" (OuterVolumeSpecName: "kube-api-access-4rhbv") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "kube-api-access-4rhbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.893892 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.894001 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data" (OuterVolumeSpecName: "config-data") pod "aab7866d-0082-4581-9e7c-146b3a1169d7" (UID: "aab7866d-0082-4581-9e7c-146b3a1169d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.923986 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924030 4915 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-ceph\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924041 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhbv\" (UniqueName: \"kubernetes.io/projected/aab7866d-0082-4581-9e7c-146b3a1169d7-kube-api-access-4rhbv\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924055 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 
20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924065 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924076 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aab7866d-0082-4581-9e7c-146b3a1169d7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:21 crc kubenswrapper[4915]: I0127 20:13:21.924088 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aab7866d-0082-4581-9e7c-146b3a1169d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090010 4915 generic.go:334] "Generic (PLEG): container finished" podID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerID="8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" exitCode=0 Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090037 4915 generic.go:334] "Generic (PLEG): container finished" podID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerID="9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" exitCode=143 Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090071 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerDied","Data":"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79"} Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090096 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerDied","Data":"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830"} Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090106 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aab7866d-0082-4581-9e7c-146b3a1169d7","Type":"ContainerDied","Data":"557ad9125965b4189830cc086fd727154415eb096bb44e78cbf58487a1d048e3"} Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090120 4915 scope.go:117] "RemoveContainer" containerID="8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.090205 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.105286 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerDied","Data":"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1"} Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.104773 4915 generic.go:334] "Generic (PLEG): container finished" podID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerID="9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1" exitCode=0 Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.120107 4915 scope.go:117] "RemoveContainer" containerID="9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.151750 4915 scope.go:117] "RemoveContainer" containerID="8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" Jan 27 20:13:22 crc kubenswrapper[4915]: E0127 20:13:22.152154 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79\": container with ID starting with 8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79 not found: ID does not exist" containerID="8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" Jan 27 
20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152178 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79"} err="failed to get container status \"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79\": rpc error: code = NotFound desc = could not find container \"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79\": container with ID starting with 8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79 not found: ID does not exist" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152201 4915 scope.go:117] "RemoveContainer" containerID="9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152359 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:22 crc kubenswrapper[4915]: E0127 20:13:22.152576 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830\": container with ID starting with 9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830 not found: ID does not exist" containerID="9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152595 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830"} err="failed to get container status \"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830\": rpc error: code = NotFound desc = could not find container \"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830\": container with ID starting with 9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830 not found: ID does not exist" Jan 27 
20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152614 4915 scope.go:117] "RemoveContainer" containerID="8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152784 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79"} err="failed to get container status \"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79\": rpc error: code = NotFound desc = could not find container \"8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79\": container with ID starting with 8f083fbc29570e3593ba29ed60f104ac8e84f525652f8bc99132bce6af3d6a79 not found: ID does not exist" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.152811 4915 scope.go:117] "RemoveContainer" containerID="9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.153133 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830"} err="failed to get container status \"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830\": rpc error: code = NotFound desc = could not find container \"9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830\": container with ID starting with 9faf2487f5bb77a5aa3ac8a334170071fae53be11c9c7af53aad9d2d0271f830 not found: ID does not exist" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.168049 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.181850 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:22 crc kubenswrapper[4915]: E0127 20:13:22.182230 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-httpd" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.182247 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-httpd" Jan 27 20:13:22 crc kubenswrapper[4915]: E0127 20:13:22.182299 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-log" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.182306 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-log" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.182463 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-log" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.182478 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" containerName="glance-httpd" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.183363 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.186309 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.186615 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329325 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329423 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75f2\" (UniqueName: \"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-kube-api-access-p75f2\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329459 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329498 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329531 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329587 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.329616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.414301 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434092 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434431 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p75f2\" (UniqueName: 
\"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-kube-api-access-p75f2\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434458 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434575 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.434594 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.435270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.435641 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4863f021-132b-43da-adf0-babc9a72f8a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.443643 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.443835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.446715 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 
20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.447472 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4863f021-132b-43da-adf0-babc9a72f8a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.450269 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p75f2\" (UniqueName: \"kubernetes.io/projected/4863f021-132b-43da-adf0-babc9a72f8a1-kube-api-access-p75f2\") pod \"glance-default-internal-api-0\" (UID: \"4863f021-132b-43da-adf0-babc9a72f8a1\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:13:22 crc kubenswrapper[4915]: I0127 20:13:22.503977 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.001036 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:13:23 crc kubenswrapper[4915]: W0127 20:13:23.001961 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4863f021_132b_43da_adf0_babc9a72f8a1.slice/crio-0777eb6839d852271abb5fe5a213b2b5a42bc7e2059fbcb4411df996e408bd8d WatchSource:0}: Error finding container 0777eb6839d852271abb5fe5a213b2b5a42bc7e2059fbcb4411df996e408bd8d: Status 404 returned error can't find the container with id 0777eb6839d852271abb5fe5a213b2b5a42bc7e2059fbcb4411df996e408bd8d Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.137259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4863f021-132b-43da-adf0-babc9a72f8a1","Type":"ContainerStarted","Data":"0777eb6839d852271abb5fe5a213b2b5a42bc7e2059fbcb4411df996e408bd8d"} Jan 27 20:13:23 crc 
kubenswrapper[4915]: I0127 20:13:23.145398 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerStarted","Data":"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52"} Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.153664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7efd00f-5998-493d-a94a-7c319ecc7663","Type":"ContainerStarted","Data":"cce2f29b108eeba3763d8c6bc62cd8ec38074a29924592ca31189c4c0dcdd119"} Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.153972 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7efd00f-5998-493d-a94a-7c319ecc7663","Type":"ContainerStarted","Data":"5b866f308313f478ef0c7327acb13439648330434a1f317117760d0f39673d1b"} Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.373527 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7989c88e-cdd8-4abb-ae52-6431d4770d08" path="/var/lib/kubelet/pods/7989c88e-cdd8-4abb-ae52-6431d4770d08/volumes" Jan 27 20:13:23 crc kubenswrapper[4915]: I0127 20:13:23.374554 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab7866d-0082-4581-9e7c-146b3a1169d7" path="/var/lib/kubelet/pods/aab7866d-0082-4581-9e7c-146b3a1169d7/volumes" Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.165353 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4863f021-132b-43da-adf0-babc9a72f8a1","Type":"ContainerStarted","Data":"70356535d8a703f5b9384aa0df74465bae9c2701b97a6a8a6e68ff0683f1b2a5"} Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.165711 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4863f021-132b-43da-adf0-babc9a72f8a1","Type":"ContainerStarted","Data":"b0aa02fb408407dd1bbf12bdfcdff9d2fb6e57e12076fe3ab9359056822747ee"} Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.172417 4915 generic.go:334] "Generic (PLEG): container finished" podID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerID="d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52" exitCode=0 Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.172488 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerDied","Data":"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52"} Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.177963 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7efd00f-5998-493d-a94a-7c319ecc7663","Type":"ContainerStarted","Data":"3c748c7319057ac73321399d582763b61187c96ec317eb65783ef4b375cfd162"} Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.199665 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.199643877 podStartE2EDuration="2.199643877s" podCreationTimestamp="2026-01-27 20:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:24.187015556 +0000 UTC m=+5495.544869230" watchObservedRunningTime="2026-01-27 20:13:24.199643877 +0000 UTC m=+5495.557497541" Jan 27 20:13:24 crc kubenswrapper[4915]: I0127 20:13:24.216430 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.21640591 podStartE2EDuration="3.21640591s" podCreationTimestamp="2026-01-27 20:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:24.209661514 +0000 UTC m=+5495.567515178" watchObservedRunningTime="2026-01-27 20:13:24.21640591 +0000 UTC m=+5495.574259574" Jan 27 20:13:25 crc kubenswrapper[4915]: I0127 20:13:25.189781 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerStarted","Data":"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b"} Jan 27 20:13:25 crc kubenswrapper[4915]: I0127 20:13:25.218928 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mfkk4" podStartSLOduration=3.7467610799999997 podStartE2EDuration="6.218909281s" podCreationTimestamp="2026-01-27 20:13:19 +0000 UTC" firstStartedPulling="2026-01-27 20:13:22.119505265 +0000 UTC m=+5493.477358929" lastFinishedPulling="2026-01-27 20:13:24.591653456 +0000 UTC m=+5495.949507130" observedRunningTime="2026-01-27 20:13:25.210660758 +0000 UTC m=+5496.568514422" watchObservedRunningTime="2026-01-27 20:13:25.218909281 +0000 UTC m=+5496.576762935" Jan 27 20:13:27 crc kubenswrapper[4915]: I0127 20:13:27.730989 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:27 crc kubenswrapper[4915]: I0127 20:13:27.802507 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"] Jan 27 20:13:27 crc kubenswrapper[4915]: I0127 20:13:27.802814 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="dnsmasq-dns" containerID="cri-o://51d69d60c41b6681f468aeac488c104b40942aa3004a7f012561f3afdaaef61b" gracePeriod=10 Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.219699 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerID="51d69d60c41b6681f468aeac488c104b40942aa3004a7f012561f3afdaaef61b" exitCode=0 Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.219753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" event={"ID":"414fa0ae-660b-4e2e-acb8-6e039c2cd11a","Type":"ContainerDied","Data":"51d69d60c41b6681f468aeac488c104b40942aa3004a7f012561f3afdaaef61b"} Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.219785 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" event={"ID":"414fa0ae-660b-4e2e-acb8-6e039c2cd11a","Type":"ContainerDied","Data":"2aa78a261d1e496cc5a87f5dd1b55d5f5e83ddc717709919eba4119d976f260a"} Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.219833 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa78a261d1e496cc5a87f5dd1b55d5f5e83ddc717709919eba4119d976f260a" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.266013 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.352713 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config\") pod \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.352823 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc\") pod \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.352898 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldt9z\" (UniqueName: \"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z\") pod \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.352917 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb\") pod \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.352964 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb\") pod \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\" (UID: \"414fa0ae-660b-4e2e-acb8-6e039c2cd11a\") " Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.359027 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z" (OuterVolumeSpecName: "kube-api-access-ldt9z") pod "414fa0ae-660b-4e2e-acb8-6e039c2cd11a" (UID: "414fa0ae-660b-4e2e-acb8-6e039c2cd11a"). InnerVolumeSpecName "kube-api-access-ldt9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.395507 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config" (OuterVolumeSpecName: "config") pod "414fa0ae-660b-4e2e-acb8-6e039c2cd11a" (UID: "414fa0ae-660b-4e2e-acb8-6e039c2cd11a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.397811 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "414fa0ae-660b-4e2e-acb8-6e039c2cd11a" (UID: "414fa0ae-660b-4e2e-acb8-6e039c2cd11a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.399285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "414fa0ae-660b-4e2e-acb8-6e039c2cd11a" (UID: "414fa0ae-660b-4e2e-acb8-6e039c2cd11a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.404173 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "414fa0ae-660b-4e2e-acb8-6e039c2cd11a" (UID: "414fa0ae-660b-4e2e-acb8-6e039c2cd11a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.455375 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.455431 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldt9z\" (UniqueName: \"kubernetes.io/projected/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-kube-api-access-ldt9z\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.455457 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.455476 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:28 crc kubenswrapper[4915]: I0127 20:13:28.455497 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414fa0ae-660b-4e2e-acb8-6e039c2cd11a-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:29 crc kubenswrapper[4915]: I0127 20:13:29.231037 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" Jan 27 20:13:29 crc kubenswrapper[4915]: I0127 20:13:29.265644 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"] Jan 27 20:13:29 crc kubenswrapper[4915]: I0127 20:13:29.276450 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dc9c57db9-qwm47"] Jan 27 20:13:29 crc kubenswrapper[4915]: I0127 20:13:29.368626 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" path="/var/lib/kubelet/pods/414fa0ae-660b-4e2e-acb8-6e039c2cd11a/volumes" Jan 27 20:13:30 crc kubenswrapper[4915]: I0127 20:13:30.303016 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:30 crc kubenswrapper[4915]: I0127 20:13:30.303374 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:30 crc kubenswrapper[4915]: I0127 20:13:30.351039 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 20:13:31.331548 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 20:13:31.403205 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"] Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 20:13:31.754385 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 20:13:31.754694 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 
20:13:31.787066 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:13:31 crc kubenswrapper[4915]: I0127 20:13:31.791215 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.262409 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.262463 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.504623 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.505692 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.531653 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:32 crc kubenswrapper[4915]: I0127 20:13:32.565781 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.052084 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6dc9c57db9-qwm47" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.35:5353: i/o timeout" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.271960 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mfkk4" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="registry-server" 
containerID="cri-o://71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b" gracePeriod=2 Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.272903 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.273114 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.790200 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.951655 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrtl\" (UniqueName: \"kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl\") pod \"1bc46d68-20c5-4923-a500-6552cc12f7ea\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.951739 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content\") pod \"1bc46d68-20c5-4923-a500-6552cc12f7ea\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.951874 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities\") pod \"1bc46d68-20c5-4923-a500-6552cc12f7ea\" (UID: \"1bc46d68-20c5-4923-a500-6552cc12f7ea\") " Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.952888 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities" (OuterVolumeSpecName: "utilities") pod 
"1bc46d68-20c5-4923-a500-6552cc12f7ea" (UID: "1bc46d68-20c5-4923-a500-6552cc12f7ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:13:33 crc kubenswrapper[4915]: I0127 20:13:33.970893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl" (OuterVolumeSpecName: "kube-api-access-bjrtl") pod "1bc46d68-20c5-4923-a500-6552cc12f7ea" (UID: "1bc46d68-20c5-4923-a500-6552cc12f7ea"). InnerVolumeSpecName "kube-api-access-bjrtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.064128 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.064207 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrtl\" (UniqueName: \"kubernetes.io/projected/1bc46d68-20c5-4923-a500-6552cc12f7ea-kube-api-access-bjrtl\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.227675 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bc46d68-20c5-4923-a500-6552cc12f7ea" (UID: "1bc46d68-20c5-4923-a500-6552cc12f7ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.247087 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.260019 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.267475 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc46d68-20c5-4923-a500-6552cc12f7ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.287753 4915 generic.go:334] "Generic (PLEG): container finished" podID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerID="71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b" exitCode=0 Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.287864 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mfkk4" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.287923 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerDied","Data":"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b"} Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.287977 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfkk4" event={"ID":"1bc46d68-20c5-4923-a500-6552cc12f7ea","Type":"ContainerDied","Data":"29c8cc1779a2d567d99d8a69526b4fa794130f3cb2e195e80bdbe8495d38ca1f"} Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.288001 4915 scope.go:117] "RemoveContainer" containerID="71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.342725 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"] Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.348412 4915 scope.go:117] "RemoveContainer" containerID="d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.350532 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mfkk4"] Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.395079 4915 scope.go:117] "RemoveContainer" containerID="9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.410528 4915 scope.go:117] "RemoveContainer" containerID="71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b" Jan 27 20:13:34 crc kubenswrapper[4915]: E0127 20:13:34.410945 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b\": container with ID starting with 71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b not found: ID does not exist" containerID="71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.410974 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b"} err="failed to get container status \"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b\": rpc error: code = NotFound desc = could not find container \"71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b\": container with ID starting with 71b4a82d168a03dd60a09d17e32a8270dd207d32147589669af02b101e55809b not found: ID does not exist" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.410996 4915 scope.go:117] "RemoveContainer" containerID="d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52" Jan 27 20:13:34 crc kubenswrapper[4915]: E0127 20:13:34.411271 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52\": container with ID starting with d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52 not found: ID does not exist" containerID="d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.411323 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52"} err="failed to get container status \"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52\": rpc error: code = NotFound desc = could not find container \"d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52\": container with ID 
starting with d4aa2d988e58a1f4201f822362e31b247649de368abb407a08a6928241987d52 not found: ID does not exist" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.411366 4915 scope.go:117] "RemoveContainer" containerID="9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1" Jan 27 20:13:34 crc kubenswrapper[4915]: E0127 20:13:34.412182 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1\": container with ID starting with 9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1 not found: ID does not exist" containerID="9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1" Jan 27 20:13:34 crc kubenswrapper[4915]: I0127 20:13:34.412202 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1"} err="failed to get container status \"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1\": rpc error: code = NotFound desc = could not find container \"9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1\": container with ID starting with 9232f53f737437df2a2b1091dcdeefe36d2c419b516959e76320ad09e8fdb8e1 not found: ID does not exist" Jan 27 20:13:35 crc kubenswrapper[4915]: I0127 20:13:35.297393 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:13:35 crc kubenswrapper[4915]: I0127 20:13:35.297423 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:13:35 crc kubenswrapper[4915]: I0127 20:13:35.336432 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 20:13:35 crc kubenswrapper[4915]: I0127 20:13:35.336670 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 
27 20:13:35 crc kubenswrapper[4915]: I0127 20:13:35.375404 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" path="/var/lib/kubelet/pods/1bc46d68-20c5-4923-a500-6552cc12f7ea/volumes" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.054195 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-z6tzp"] Jan 27 20:13:41 crc kubenswrapper[4915]: E0127 20:13:41.055056 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="registry-server" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055070 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="registry-server" Jan 27 20:13:41 crc kubenswrapper[4915]: E0127 20:13:41.055080 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="extract-content" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055087 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="extract-content" Jan 27 20:13:41 crc kubenswrapper[4915]: E0127 20:13:41.055095 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="extract-utilities" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055104 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="extract-utilities" Jan 27 20:13:41 crc kubenswrapper[4915]: E0127 20:13:41.055128 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="dnsmasq-dns" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055134 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="dnsmasq-dns" Jan 27 
20:13:41 crc kubenswrapper[4915]: E0127 20:13:41.055152 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="init" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055157 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="init" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055308 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="414fa0ae-660b-4e2e-acb8-6e039c2cd11a" containerName="dnsmasq-dns" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055334 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc46d68-20c5-4923-a500-6552cc12f7ea" containerName="registry-server" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.055892 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.069844 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z6tzp"] Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.158657 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7084-account-create-update-k24wr"] Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.159846 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.162277 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.169139 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7084-account-create-update-k24wr"] Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.190833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm866\" (UniqueName: \"kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866\") pod \"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.190899 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts\") pod \"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.292900 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5m2f\" (UniqueName: \"kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.292943 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm866\" (UniqueName: \"kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866\") pod 
\"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.292974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts\") pod \"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.293299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.293923 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts\") pod \"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.312773 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm866\" (UniqueName: \"kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866\") pod \"placement-db-create-z6tzp\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.375374 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.394378 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.394840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5m2f\" (UniqueName: \"kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.395198 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.410624 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5m2f\" (UniqueName: \"kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f\") pod \"placement-7084-account-create-update-k24wr\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.473366 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.800258 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z6tzp"] Jan 27 20:13:41 crc kubenswrapper[4915]: W0127 20:13:41.801168 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod478c9727_7969_4a55_a51b_0f10d52bbcee.slice/crio-c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7 WatchSource:0}: Error finding container c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7: Status 404 returned error can't find the container with id c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7 Jan 27 20:13:41 crc kubenswrapper[4915]: I0127 20:13:41.905206 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7084-account-create-update-k24wr"] Jan 27 20:13:41 crc kubenswrapper[4915]: W0127 20:13:41.933963 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531d7f69_5afe_4721_8600_7cba22e2a02f.slice/crio-89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078 WatchSource:0}: Error finding container 89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078: Status 404 returned error can't find the container with id 89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078 Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.371957 4915 generic.go:334] "Generic (PLEG): container finished" podID="478c9727-7969-4a55-a51b-0f10d52bbcee" containerID="fb8d352cc44eeba36594b18ae9a6b1a992faf783308def8cfed1feaecfb3e191" exitCode=0 Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.372023 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6tzp" 
event={"ID":"478c9727-7969-4a55-a51b-0f10d52bbcee","Type":"ContainerDied","Data":"fb8d352cc44eeba36594b18ae9a6b1a992faf783308def8cfed1feaecfb3e191"} Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.372080 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6tzp" event={"ID":"478c9727-7969-4a55-a51b-0f10d52bbcee","Type":"ContainerStarted","Data":"c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7"} Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.375159 4915 generic.go:334] "Generic (PLEG): container finished" podID="531d7f69-5afe-4721-8600-7cba22e2a02f" containerID="7e126cc296def1ec61a71aa659721f23cbd6b572935b12f84bd4e1cf792489ba" exitCode=0 Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.375190 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7084-account-create-update-k24wr" event={"ID":"531d7f69-5afe-4721-8600-7cba22e2a02f","Type":"ContainerDied","Data":"7e126cc296def1ec61a71aa659721f23cbd6b572935b12f84bd4e1cf792489ba"} Jan 27 20:13:42 crc kubenswrapper[4915]: I0127 20:13:42.375208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7084-account-create-update-k24wr" event={"ID":"531d7f69-5afe-4721-8600-7cba22e2a02f","Type":"ContainerStarted","Data":"89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078"} Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.726250 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.801393 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.836083 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts\") pod \"531d7f69-5afe-4721-8600-7cba22e2a02f\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.836149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5m2f\" (UniqueName: \"kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f\") pod \"531d7f69-5afe-4721-8600-7cba22e2a02f\" (UID: \"531d7f69-5afe-4721-8600-7cba22e2a02f\") " Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.836858 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "531d7f69-5afe-4721-8600-7cba22e2a02f" (UID: "531d7f69-5afe-4721-8600-7cba22e2a02f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.842244 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f" (OuterVolumeSpecName: "kube-api-access-c5m2f") pod "531d7f69-5afe-4721-8600-7cba22e2a02f" (UID: "531d7f69-5afe-4721-8600-7cba22e2a02f"). InnerVolumeSpecName "kube-api-access-c5m2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.937505 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts\") pod \"478c9727-7969-4a55-a51b-0f10d52bbcee\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.937597 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm866\" (UniqueName: \"kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866\") pod \"478c9727-7969-4a55-a51b-0f10d52bbcee\" (UID: \"478c9727-7969-4a55-a51b-0f10d52bbcee\") " Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.938133 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "478c9727-7969-4a55-a51b-0f10d52bbcee" (UID: "478c9727-7969-4a55-a51b-0f10d52bbcee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.938486 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531d7f69-5afe-4721-8600-7cba22e2a02f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.938513 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5m2f\" (UniqueName: \"kubernetes.io/projected/531d7f69-5afe-4721-8600-7cba22e2a02f-kube-api-access-c5m2f\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.938527 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478c9727-7969-4a55-a51b-0f10d52bbcee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:43 crc kubenswrapper[4915]: I0127 20:13:43.941122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866" (OuterVolumeSpecName: "kube-api-access-cm866") pod "478c9727-7969-4a55-a51b-0f10d52bbcee" (UID: "478c9727-7969-4a55-a51b-0f10d52bbcee"). InnerVolumeSpecName "kube-api-access-cm866". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.040839 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm866\" (UniqueName: \"kubernetes.io/projected/478c9727-7969-4a55-a51b-0f10d52bbcee-kube-api-access-cm866\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.390493 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6tzp" event={"ID":"478c9727-7969-4a55-a51b-0f10d52bbcee","Type":"ContainerDied","Data":"c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7"} Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.390539 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93008521e803b04b171d1ea932ef0651a8ea36258da7fe05d69194771fd49f7" Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.390542 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6tzp" Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.392143 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7084-account-create-update-k24wr" event={"ID":"531d7f69-5afe-4721-8600-7cba22e2a02f","Type":"ContainerDied","Data":"89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078"} Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.392302 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89029f95442e506fb25395e890f5d89dee7fb6cacaf66a0ce3cace3c242aa078" Jan 27 20:13:44 crc kubenswrapper[4915]: I0127 20:13:44.392240 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7084-account-create-update-k24wr" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.539142 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"] Jan 27 20:13:46 crc kubenswrapper[4915]: E0127 20:13:46.539782 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478c9727-7969-4a55-a51b-0f10d52bbcee" containerName="mariadb-database-create" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.539814 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="478c9727-7969-4a55-a51b-0f10d52bbcee" containerName="mariadb-database-create" Jan 27 20:13:46 crc kubenswrapper[4915]: E0127 20:13:46.539842 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531d7f69-5afe-4721-8600-7cba22e2a02f" containerName="mariadb-account-create-update" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.539850 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="531d7f69-5afe-4721-8600-7cba22e2a02f" containerName="mariadb-account-create-update" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.540043 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="531d7f69-5afe-4721-8600-7cba22e2a02f" containerName="mariadb-account-create-update" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.540074 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="478c9727-7969-4a55-a51b-0f10d52bbcee" containerName="mariadb-database-create" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.541980 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.620652 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"] Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.635138 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gpx4t"] Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.636462 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.638629 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pncmx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.638928 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.638939 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.643813 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gpx4t"] Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.693436 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kgg\" (UniqueName: \"kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.693495 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: 
\"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.693543 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.693594 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.693649 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795138 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795202 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " 
pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795261 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795295 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795327 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795452 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795557 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 
20:13:46.795732 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl8w\" (UniqueName: \"kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795834 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kgg\" (UniqueName: \"kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.795898 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.796534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.796573 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.796751 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.796878 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.813722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kgg\" (UniqueName: \"kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg\") pod \"dnsmasq-dns-875c99799-qdfcx\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") " pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.863482 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.897938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.898041 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.898115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.898255 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sl8w\" (UniqueName: \"kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.898334 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.898551 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.901713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.902814 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.903248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.913657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sl8w\" (UniqueName: \"kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w\") pod \"placement-db-sync-gpx4t\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:46 crc kubenswrapper[4915]: I0127 20:13:46.955694 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:47 crc kubenswrapper[4915]: I0127 20:13:47.315772 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"] Jan 27 20:13:47 crc kubenswrapper[4915]: I0127 20:13:47.417121 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-875c99799-qdfcx" event={"ID":"e962e699-4ef5-48c8-9704-724462d56679","Type":"ContainerStarted","Data":"41f299716271cfa9c30dc31c3594530e415dd9afa89cf80700d7b2425b20a8fa"} Jan 27 20:13:47 crc kubenswrapper[4915]: W0127 20:13:47.515693 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2606ee_f8a5_4465_8ab4_feb4ccea59e2.slice/crio-f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0 WatchSource:0}: Error finding container f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0: Status 404 returned error can't find the container with id f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0 Jan 27 20:13:47 crc kubenswrapper[4915]: I0127 20:13:47.515720 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gpx4t"] Jan 27 20:13:48 crc kubenswrapper[4915]: I0127 20:13:48.426767 4915 generic.go:334] "Generic (PLEG): container finished" podID="e962e699-4ef5-48c8-9704-724462d56679" containerID="e5f67737f306cf0ecdbe580a6dd1a6cbc670157dd4106fb508f66a99e190f9c9" exitCode=0 Jan 27 20:13:48 crc kubenswrapper[4915]: I0127 20:13:48.427065 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-875c99799-qdfcx" event={"ID":"e962e699-4ef5-48c8-9704-724462d56679","Type":"ContainerDied","Data":"e5f67737f306cf0ecdbe580a6dd1a6cbc670157dd4106fb508f66a99e190f9c9"} Jan 27 20:13:48 crc kubenswrapper[4915]: I0127 20:13:48.429554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gpx4t" 
event={"ID":"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2","Type":"ContainerStarted","Data":"a6bae7069833d21acf5e928f21cd290ea796ed4117037b497884d21f42618186"} Jan 27 20:13:48 crc kubenswrapper[4915]: I0127 20:13:48.429592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gpx4t" event={"ID":"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2","Type":"ContainerStarted","Data":"f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0"} Jan 27 20:13:48 crc kubenswrapper[4915]: I0127 20:13:48.471074 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gpx4t" podStartSLOduration=2.471053752 podStartE2EDuration="2.471053752s" podCreationTimestamp="2026-01-27 20:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:48.470376045 +0000 UTC m=+5519.828229749" watchObservedRunningTime="2026-01-27 20:13:48.471053752 +0000 UTC m=+5519.828907426" Jan 27 20:13:49 crc kubenswrapper[4915]: I0127 20:13:49.441266 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-875c99799-qdfcx" event={"ID":"e962e699-4ef5-48c8-9704-724462d56679","Type":"ContainerStarted","Data":"1bb097f916c90d17a58fc5d98469e6091b973555b1b4de4f38a49b6737ece427"} Jan 27 20:13:49 crc kubenswrapper[4915]: I0127 20:13:49.441686 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:49 crc kubenswrapper[4915]: I0127 20:13:49.446620 4915 generic.go:334] "Generic (PLEG): container finished" podID="aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" containerID="a6bae7069833d21acf5e928f21cd290ea796ed4117037b497884d21f42618186" exitCode=0 Jan 27 20:13:49 crc kubenswrapper[4915]: I0127 20:13:49.446677 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gpx4t" 
event={"ID":"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2","Type":"ContainerDied","Data":"a6bae7069833d21acf5e928f21cd290ea796ed4117037b497884d21f42618186"} Jan 27 20:13:49 crc kubenswrapper[4915]: I0127 20:13:49.481128 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-875c99799-qdfcx" podStartSLOduration=3.481099798 podStartE2EDuration="3.481099798s" podCreationTimestamp="2026-01-27 20:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:49.462009698 +0000 UTC m=+5520.819863362" watchObservedRunningTime="2026-01-27 20:13:49.481099798 +0000 UTC m=+5520.838953502" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.624523 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.624912 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.810240 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.870394 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts\") pod \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.877492 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts" (OuterVolumeSpecName: "scripts") pod "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" (UID: "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.971853 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data\") pod \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.971910 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle\") pod \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.971947 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs\") pod \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.972003 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-7sl8w\" (UniqueName: \"kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w\") pod \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\" (UID: \"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2\") " Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.972225 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.972570 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs" (OuterVolumeSpecName: "logs") pod "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" (UID: "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.975093 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w" (OuterVolumeSpecName: "kube-api-access-7sl8w") pod "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" (UID: "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2"). InnerVolumeSpecName "kube-api-access-7sl8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.992526 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data" (OuterVolumeSpecName: "config-data") pod "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" (UID: "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:50 crc kubenswrapper[4915]: I0127 20:13:50.993087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" (UID: "aa2606ee-f8a5-4465-8ab4-feb4ccea59e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.073166 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.073196 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.073207 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.073217 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sl8w\" (UniqueName: \"kubernetes.io/projected/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2-kube-api-access-7sl8w\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.469506 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gpx4t" event={"ID":"aa2606ee-f8a5-4465-8ab4-feb4ccea59e2","Type":"ContainerDied","Data":"f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0"} Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.469552 4915 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="f2f31bd07cb0c91d7651b2e5746e4a9b2755b3e61ca2781957947820d76abfa0" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.469620 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gpx4t" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.565976 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9f9d58b64-d6k58"] Jan 27 20:13:51 crc kubenswrapper[4915]: E0127 20:13:51.566422 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" containerName="placement-db-sync" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.566454 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" containerName="placement-db-sync" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.566701 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" containerName="placement-db-sync" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.568899 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.574336 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.574352 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.574415 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pncmx" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.576810 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9f9d58b64-d6k58"] Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.683595 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bec9253f-bf66-4582-9ec1-d8a758db7367-logs\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.684028 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-scripts\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.684160 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-combined-ca-bundle\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.684504 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j49pr\" (UniqueName: \"kubernetes.io/projected/bec9253f-bf66-4582-9ec1-d8a758db7367-kube-api-access-j49pr\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.684569 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-config-data\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.785570 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-config-data\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.785646 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bec9253f-bf66-4582-9ec1-d8a758db7367-logs\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.785692 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-scripts\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.785717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-combined-ca-bundle\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.785767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j49pr\" (UniqueName: \"kubernetes.io/projected/bec9253f-bf66-4582-9ec1-d8a758db7367-kube-api-access-j49pr\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.786248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bec9253f-bf66-4582-9ec1-d8a758db7367-logs\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.790004 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-scripts\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.790044 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-combined-ca-bundle\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.790848 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bec9253f-bf66-4582-9ec1-d8a758db7367-config-data\") pod 
\"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.800551 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j49pr\" (UniqueName: \"kubernetes.io/projected/bec9253f-bf66-4582-9ec1-d8a758db7367-kube-api-access-j49pr\") pod \"placement-9f9d58b64-d6k58\" (UID: \"bec9253f-bf66-4582-9ec1-d8a758db7367\") " pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:51 crc kubenswrapper[4915]: I0127 20:13:51.899479 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:52 crc kubenswrapper[4915]: I0127 20:13:52.329335 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9f9d58b64-d6k58"] Jan 27 20:13:52 crc kubenswrapper[4915]: I0127 20:13:52.478973 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f9d58b64-d6k58" event={"ID":"bec9253f-bf66-4582-9ec1-d8a758db7367","Type":"ContainerStarted","Data":"6f1b439147016e0c2a2b608cb47a8d094e0f57d4c2c52aa85fc5b455d3b500d1"} Jan 27 20:13:53 crc kubenswrapper[4915]: I0127 20:13:53.489663 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f9d58b64-d6k58" event={"ID":"bec9253f-bf66-4582-9ec1-d8a758db7367","Type":"ContainerStarted","Data":"7d57f9546997d3231b5ebc348776c509e35d37a99ddc27da30a909f37153a8bb"} Jan 27 20:13:53 crc kubenswrapper[4915]: I0127 20:13:53.490013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f9d58b64-d6k58" event={"ID":"bec9253f-bf66-4582-9ec1-d8a758db7367","Type":"ContainerStarted","Data":"1521da600722afaf9ecf33e1df800baad1643e700d35782778061cca9dbfa092"} Jan 27 20:13:53 crc kubenswrapper[4915]: I0127 20:13:53.490050 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:53 crc 
kubenswrapper[4915]: I0127 20:13:53.490069 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:13:53 crc kubenswrapper[4915]: I0127 20:13:53.518480 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9f9d58b64-d6k58" podStartSLOduration=2.518458535 podStartE2EDuration="2.518458535s" podCreationTimestamp="2026-01-27 20:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:53.513500183 +0000 UTC m=+5524.871353877" watchObservedRunningTime="2026-01-27 20:13:53.518458535 +0000 UTC m=+5524.876312209" Jan 27 20:13:56 crc kubenswrapper[4915]: I0127 20:13:56.865197 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-875c99799-qdfcx" Jan 27 20:13:56 crc kubenswrapper[4915]: I0127 20:13:56.946231 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"] Jan 27 20:13:56 crc kubenswrapper[4915]: I0127 20:13:56.946520 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="dnsmasq-dns" containerID="cri-o://a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36" gracePeriod=10 Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.405056 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.524564 4915 generic.go:334] "Generic (PLEG): container finished" podID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerID="a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36" exitCode=0 Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.524620 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" event={"ID":"dd6c739c-9243-41cc-a4a9-fa33bd588820","Type":"ContainerDied","Data":"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36"} Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.524628 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.524664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cb9bb6f7-mkftp" event={"ID":"dd6c739c-9243-41cc-a4a9-fa33bd588820","Type":"ContainerDied","Data":"a6fc022437fc144c3fade216574cf47140e1119069cef61bbe2052210284db91"} Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.524686 4915 scope.go:117] "RemoveContainer" containerID="a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.543857 4915 scope.go:117] "RemoveContainer" containerID="2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.560890 4915 scope.go:117] "RemoveContainer" containerID="a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36" Jan 27 20:13:57 crc kubenswrapper[4915]: E0127 20:13:57.561544 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36\": container with ID starting with 
a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36 not found: ID does not exist" containerID="a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.561597 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36"} err="failed to get container status \"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36\": rpc error: code = NotFound desc = could not find container \"a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36\": container with ID starting with a685d836ff80cb37ebfb1f17c81148557140dfce62e128e0236709160a21bf36 not found: ID does not exist" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.561629 4915 scope.go:117] "RemoveContainer" containerID="2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc" Jan 27 20:13:57 crc kubenswrapper[4915]: E0127 20:13:57.561994 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc\": container with ID starting with 2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc not found: ID does not exist" containerID="2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.562025 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc"} err="failed to get container status \"2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc\": rpc error: code = NotFound desc = could not find container \"2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc\": container with ID starting with 2689c859b8385f2132016a4df13e865bd0efdeb5b5fe279a5f7e7b6b39dac6bc not found: ID does not 
exist" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.584945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb\") pod \"dd6c739c-9243-41cc-a4a9-fa33bd588820\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.585055 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config\") pod \"dd6c739c-9243-41cc-a4a9-fa33bd588820\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.585096 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb\") pod \"dd6c739c-9243-41cc-a4a9-fa33bd588820\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.585124 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxd2\" (UniqueName: \"kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2\") pod \"dd6c739c-9243-41cc-a4a9-fa33bd588820\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.585160 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc\") pod \"dd6c739c-9243-41cc-a4a9-fa33bd588820\" (UID: \"dd6c739c-9243-41cc-a4a9-fa33bd588820\") " Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.589684 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2" 
(OuterVolumeSpecName: "kube-api-access-vhxd2") pod "dd6c739c-9243-41cc-a4a9-fa33bd588820" (UID: "dd6c739c-9243-41cc-a4a9-fa33bd588820"). InnerVolumeSpecName "kube-api-access-vhxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.623318 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config" (OuterVolumeSpecName: "config") pod "dd6c739c-9243-41cc-a4a9-fa33bd588820" (UID: "dd6c739c-9243-41cc-a4a9-fa33bd588820"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.628061 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd6c739c-9243-41cc-a4a9-fa33bd588820" (UID: "dd6c739c-9243-41cc-a4a9-fa33bd588820"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.628415 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd6c739c-9243-41cc-a4a9-fa33bd588820" (UID: "dd6c739c-9243-41cc-a4a9-fa33bd588820"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.630758 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd6c739c-9243-41cc-a4a9-fa33bd588820" (UID: "dd6c739c-9243-41cc-a4a9-fa33bd588820"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.687980 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.688017 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.688029 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.688041 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6c739c-9243-41cc-a4a9-fa33bd588820-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.688054 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhxd2\" (UniqueName: \"kubernetes.io/projected/dd6c739c-9243-41cc-a4a9-fa33bd588820-kube-api-access-vhxd2\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.867172 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"] Jan 27 20:13:57 crc kubenswrapper[4915]: I0127 20:13:57.880988 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cb9bb6f7-mkftp"] Jan 27 20:13:59 crc kubenswrapper[4915]: I0127 20:13:59.373050 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" path="/var/lib/kubelet/pods/dd6c739c-9243-41cc-a4a9-fa33bd588820/volumes" Jan 27 20:14:20 crc kubenswrapper[4915]: 
I0127 20:14:20.624995 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:14:20 crc kubenswrapper[4915]: I0127 20:14:20.625608 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:14:22 crc kubenswrapper[4915]: I0127 20:14:22.937185 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:14:22 crc kubenswrapper[4915]: I0127 20:14:22.938647 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9f9d58b64-d6k58" Jan 27 20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.946033 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s5whp"] Jan 27 20:14:43 crc kubenswrapper[4915]: E0127 20:14:43.947352 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="init" Jan 27 20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.947387 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="init" Jan 27 20:14:43 crc kubenswrapper[4915]: E0127 20:14:43.947415 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="dnsmasq-dns" Jan 27 20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.947423 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="dnsmasq-dns" Jan 27 
20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.947616 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6c739c-9243-41cc-a4a9-fa33bd588820" containerName="dnsmasq-dns" Jan 27 20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.948387 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:43 crc kubenswrapper[4915]: I0127 20:14:43.958926 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s5whp"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.044085 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-k85wj"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.045165 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.064082 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-43f6-account-create-update-8rx5z"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.065230 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.067243 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.073163 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k85wj"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.092194 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-43f6-account-create-update-8rx5z"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.126052 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.126104 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.126167 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8z5\" (UniqueName: \"kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.126205 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vr7\" 
(UniqueName: \"kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228466 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8z5\" (UniqueName: \"kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228576 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vr7\" (UniqueName: \"kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228739 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228772 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ckhgz\" (UniqueName: \"kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.228868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.229631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.229631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.255548 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vr7\" (UniqueName: \"kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7\") pod \"nova-cell0-db-create-k85wj\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.255586 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8z5\" 
(UniqueName: \"kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5\") pod \"nova-api-db-create-s5whp\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.258426 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-25tx8"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.259751 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.268030 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.271371 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-64f9-account-create-update-mwfg6"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.272861 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.274619 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.286864 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-25tx8"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.296514 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-64f9-account-create-update-mwfg6"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.331505 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhgz\" (UniqueName: \"kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.331694 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.336226 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.351164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ckhgz\" (UniqueName: \"kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz\") pod \"nova-api-43f6-account-create-update-8rx5z\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.366725 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.380683 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.439855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.439997 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.440016 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxw2\" (UniqueName: \"kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc 
kubenswrapper[4915]: I0127 20:14:44.440062 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sp9j\" (UniqueName: \"kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.481019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1f37-account-create-update-ns88t"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.482165 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.490965 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.498744 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f37-account-create-update-ns88t"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.541617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.541696 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxw2\" (UniqueName: \"kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc 
kubenswrapper[4915]: I0127 20:14:44.541733 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp9j\" (UniqueName: \"kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.541841 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.544228 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.548054 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.562751 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp9j\" (UniqueName: \"kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j\") pod \"nova-cell1-db-create-25tx8\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: 
I0127 20:14:44.563118 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxw2\" (UniqueName: \"kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2\") pod \"nova-cell0-64f9-account-create-update-mwfg6\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.643078 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hdm\" (UniqueName: \"kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.643227 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.735376 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s5whp"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.745936 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9hdm\" (UniqueName: \"kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.746031 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.746775 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.765393 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.766539 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9hdm\" (UniqueName: \"kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm\") pod \"nova-cell1-1f37-account-create-update-ns88t\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.777309 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.803405 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.906161 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-k85wj"] Jan 27 20:14:44 crc kubenswrapper[4915]: I0127 20:14:44.950264 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-43f6-account-create-update-8rx5z"] Jan 27 20:14:44 crc kubenswrapper[4915]: W0127 20:14:44.966349 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd7e9679_5e88_434f_9060_cb5da12c3217.slice/crio-243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1 WatchSource:0}: Error finding container 243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1: Status 404 returned error can't find the container with id 243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1 Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.229675 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-64f9-account-create-update-mwfg6"] Jan 27 20:14:45 crc kubenswrapper[4915]: W0127 20:14:45.232421 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba08e34_40c6_440c_961b_757a4716a9fc.slice/crio-320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e WatchSource:0}: Error finding container 320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e: Status 404 returned error can't find the container with id 320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.304901 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-25tx8"] Jan 27 20:14:45 crc kubenswrapper[4915]: W0127 20:14:45.310426 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ffbb1fa_90be_4883_beac_6ab4e9d05117.slice/crio-55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc WatchSource:0}: Error finding container 55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc: Status 404 returned error can't find the container with id 55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.386239 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f37-account-create-update-ns88t"] Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.393250 4915 generic.go:334] "Generic (PLEG): container finished" podID="f81bdbbf-7100-4e76-82e3-92585b3fe6f4" containerID="9885e32ed0c96cee550752d0ac99b55ffb6a503001315d538f9adc9a67570ae7" exitCode=0 Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.393319 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s5whp" event={"ID":"f81bdbbf-7100-4e76-82e3-92585b3fe6f4","Type":"ContainerDied","Data":"9885e32ed0c96cee550752d0ac99b55ffb6a503001315d538f9adc9a67570ae7"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.393364 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s5whp" event={"ID":"f81bdbbf-7100-4e76-82e3-92585b3fe6f4","Type":"ContainerStarted","Data":"e4264a004145b41a6f787f4c2b563f3ef48f8f2b59b5a415d70ec37110c0b012"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.395066 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-43f6-account-create-update-8rx5z" event={"ID":"bd7e9679-5e88-434f-9060-cb5da12c3217","Type":"ContainerStarted","Data":"a9a8515a7269f60718b5e8af61fe9b289fa1c7273f62047c5613709939a8ce57"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.395110 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-43f6-account-create-update-8rx5z" 
event={"ID":"bd7e9679-5e88-434f-9060-cb5da12c3217","Type":"ContainerStarted","Data":"243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.396323 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25tx8" event={"ID":"2ffbb1fa-90be-4883-beac-6ab4e9d05117","Type":"ContainerStarted","Data":"55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.397700 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" event={"ID":"8ba08e34-40c6-440c-961b-757a4716a9fc","Type":"ContainerStarted","Data":"320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.398947 4915 generic.go:334] "Generic (PLEG): container finished" podID="40f70372-2930-4901-a532-193c12377d2e" containerID="cff9f4960953613ee14fa28f22270d0640c1ea1c42b00b50a3c28bd353f5cadf" exitCode=0 Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.399005 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k85wj" event={"ID":"40f70372-2930-4901-a532-193c12377d2e","Type":"ContainerDied","Data":"cff9f4960953613ee14fa28f22270d0640c1ea1c42b00b50a3c28bd353f5cadf"} Jan 27 20:14:45 crc kubenswrapper[4915]: I0127 20:14:45.399024 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k85wj" event={"ID":"40f70372-2930-4901-a532-193c12377d2e","Type":"ContainerStarted","Data":"e7e847bed4b15c6ea8df1d444d71c3858247d1c37a7aa1ddedce540738f7bcab"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.408844 4915 generic.go:334] "Generic (PLEG): container finished" podID="bd7e9679-5e88-434f-9060-cb5da12c3217" containerID="a9a8515a7269f60718b5e8af61fe9b289fa1c7273f62047c5613709939a8ce57" exitCode=0 Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.408910 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-43f6-account-create-update-8rx5z" event={"ID":"bd7e9679-5e88-434f-9060-cb5da12c3217","Type":"ContainerDied","Data":"a9a8515a7269f60718b5e8af61fe9b289fa1c7273f62047c5613709939a8ce57"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.410967 4915 generic.go:334] "Generic (PLEG): container finished" podID="2ffbb1fa-90be-4883-beac-6ab4e9d05117" containerID="0473993f63356ca9caf6b2642d6fecbf4977660ac309f73249f86aa440f4a2ab" exitCode=0 Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.411036 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25tx8" event={"ID":"2ffbb1fa-90be-4883-beac-6ab4e9d05117","Type":"ContainerDied","Data":"0473993f63356ca9caf6b2642d6fecbf4977660ac309f73249f86aa440f4a2ab"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.412839 4915 generic.go:334] "Generic (PLEG): container finished" podID="7508feea-1859-4276-9235-4008ec0767a0" containerID="f5eed7dc7178965cbc12a7bcc0efe005209b9ad321a354055d06a5ecbb5200b2" exitCode=0 Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.412898 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" event={"ID":"7508feea-1859-4276-9235-4008ec0767a0","Type":"ContainerDied","Data":"f5eed7dc7178965cbc12a7bcc0efe005209b9ad321a354055d06a5ecbb5200b2"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.412930 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" event={"ID":"7508feea-1859-4276-9235-4008ec0767a0","Type":"ContainerStarted","Data":"1121f45b7b6afbd54adf2861f4dac92fd901bfe73b6acefceeb84d0121f4243e"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.414668 4915 generic.go:334] "Generic (PLEG): container finished" podID="8ba08e34-40c6-440c-961b-757a4716a9fc" containerID="bec161609d482b98248b2d7c3142c06ec62b97f617c5d49589ae8f1e6237e719" exitCode=0 Jan 27 
20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.414907 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" event={"ID":"8ba08e34-40c6-440c-961b-757a4716a9fc","Type":"ContainerDied","Data":"bec161609d482b98248b2d7c3142c06ec62b97f617c5d49589ae8f1e6237e719"} Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.808162 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.880566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts\") pod \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.880814 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q8z5\" (UniqueName: \"kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5\") pod \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\" (UID: \"f81bdbbf-7100-4e76-82e3-92585b3fe6f4\") " Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.881205 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f81bdbbf-7100-4e76-82e3-92585b3fe6f4" (UID: "f81bdbbf-7100-4e76-82e3-92585b3fe6f4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.886227 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5" (OuterVolumeSpecName: "kube-api-access-8q8z5") pod "f81bdbbf-7100-4e76-82e3-92585b3fe6f4" (UID: "f81bdbbf-7100-4e76-82e3-92585b3fe6f4"). InnerVolumeSpecName "kube-api-access-8q8z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.947285 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.954223 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.982745 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q8z5\" (UniqueName: \"kubernetes.io/projected/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-kube-api-access-8q8z5\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:46 crc kubenswrapper[4915]: I0127 20:14:46.982781 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81bdbbf-7100-4e76-82e3-92585b3fe6f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.084483 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64vr7\" (UniqueName: \"kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7\") pod \"40f70372-2930-4901-a532-193c12377d2e\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.084547 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts\") pod \"bd7e9679-5e88-434f-9060-cb5da12c3217\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.085196 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd7e9679-5e88-434f-9060-cb5da12c3217" (UID: "bd7e9679-5e88-434f-9060-cb5da12c3217"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.084575 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhgz\" (UniqueName: \"kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz\") pod \"bd7e9679-5e88-434f-9060-cb5da12c3217\" (UID: \"bd7e9679-5e88-434f-9060-cb5da12c3217\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.085345 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts\") pod \"40f70372-2930-4901-a532-193c12377d2e\" (UID: \"40f70372-2930-4901-a532-193c12377d2e\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.085930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40f70372-2930-4901-a532-193c12377d2e" (UID: "40f70372-2930-4901-a532-193c12377d2e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.086438 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd7e9679-5e88-434f-9060-cb5da12c3217-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.086460 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f70372-2930-4901-a532-193c12377d2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.088272 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7" (OuterVolumeSpecName: "kube-api-access-64vr7") pod "40f70372-2930-4901-a532-193c12377d2e" (UID: "40f70372-2930-4901-a532-193c12377d2e"). InnerVolumeSpecName "kube-api-access-64vr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.088760 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz" (OuterVolumeSpecName: "kube-api-access-ckhgz") pod "bd7e9679-5e88-434f-9060-cb5da12c3217" (UID: "bd7e9679-5e88-434f-9060-cb5da12c3217"). InnerVolumeSpecName "kube-api-access-ckhgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.188999 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64vr7\" (UniqueName: \"kubernetes.io/projected/40f70372-2930-4901-a532-193c12377d2e-kube-api-access-64vr7\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.189052 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhgz\" (UniqueName: \"kubernetes.io/projected/bd7e9679-5e88-434f-9060-cb5da12c3217-kube-api-access-ckhgz\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.434249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-k85wj" event={"ID":"40f70372-2930-4901-a532-193c12377d2e","Type":"ContainerDied","Data":"e7e847bed4b15c6ea8df1d444d71c3858247d1c37a7aa1ddedce540738f7bcab"} Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.434322 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7e847bed4b15c6ea8df1d444d71c3858247d1c37a7aa1ddedce540738f7bcab" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.434257 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-k85wj" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.436530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s5whp" event={"ID":"f81bdbbf-7100-4e76-82e3-92585b3fe6f4","Type":"ContainerDied","Data":"e4264a004145b41a6f787f4c2b563f3ef48f8f2b59b5a415d70ec37110c0b012"} Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.436556 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4264a004145b41a6f787f4c2b563f3ef48f8f2b59b5a415d70ec37110c0b012" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.436605 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s5whp" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.438678 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-43f6-account-create-update-8rx5z" event={"ID":"bd7e9679-5e88-434f-9060-cb5da12c3217","Type":"ContainerDied","Data":"243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1"} Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.438736 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243a81a65cd05fcc07fbfd41de9f115aedd1075bc6bb6e3187a8330aa0d414b1" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.438808 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-43f6-account-create-update-8rx5z" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.696366 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.799059 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts\") pod \"8ba08e34-40c6-440c-961b-757a4716a9fc\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.799553 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxw2\" (UniqueName: \"kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2\") pod \"8ba08e34-40c6-440c-961b-757a4716a9fc\" (UID: \"8ba08e34-40c6-440c-961b-757a4716a9fc\") " Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.799773 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "8ba08e34-40c6-440c-961b-757a4716a9fc" (UID: "8ba08e34-40c6-440c-961b-757a4716a9fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.800147 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba08e34-40c6-440c-961b-757a4716a9fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.803991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2" (OuterVolumeSpecName: "kube-api-access-npxw2") pod "8ba08e34-40c6-440c-961b-757a4716a9fc" (UID: "8ba08e34-40c6-440c-961b-757a4716a9fc"). InnerVolumeSpecName "kube-api-access-npxw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.877521 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.885724 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:47 crc kubenswrapper[4915]: I0127 20:14:47.901466 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxw2\" (UniqueName: \"kubernetes.io/projected/8ba08e34-40c6-440c-961b-757a4716a9fc-kube-api-access-npxw2\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003077 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts\") pod \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003378 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts\") pod \"7508feea-1859-4276-9235-4008ec0767a0\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sp9j\" (UniqueName: \"kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j\") pod \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\" (UID: \"2ffbb1fa-90be-4883-beac-6ab4e9d05117\") " Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003521 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ffbb1fa-90be-4883-beac-6ab4e9d05117" (UID: "2ffbb1fa-90be-4883-beac-6ab4e9d05117"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003808 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7508feea-1859-4276-9235-4008ec0767a0" (UID: "7508feea-1859-4276-9235-4008ec0767a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.003926 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9hdm\" (UniqueName: \"kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm\") pod \"7508feea-1859-4276-9235-4008ec0767a0\" (UID: \"7508feea-1859-4276-9235-4008ec0767a0\") " Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.004394 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffbb1fa-90be-4883-beac-6ab4e9d05117-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.004493 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508feea-1859-4276-9235-4008ec0767a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.006464 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j" (OuterVolumeSpecName: "kube-api-access-9sp9j") pod "2ffbb1fa-90be-4883-beac-6ab4e9d05117" (UID: "2ffbb1fa-90be-4883-beac-6ab4e9d05117"). InnerVolumeSpecName "kube-api-access-9sp9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.006873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm" (OuterVolumeSpecName: "kube-api-access-r9hdm") pod "7508feea-1859-4276-9235-4008ec0767a0" (UID: "7508feea-1859-4276-9235-4008ec0767a0"). InnerVolumeSpecName "kube-api-access-r9hdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.105813 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9hdm\" (UniqueName: \"kubernetes.io/projected/7508feea-1859-4276-9235-4008ec0767a0-kube-api-access-r9hdm\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.105845 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sp9j\" (UniqueName: \"kubernetes.io/projected/2ffbb1fa-90be-4883-beac-6ab4e9d05117-kube-api-access-9sp9j\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.446395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-25tx8" event={"ID":"2ffbb1fa-90be-4883-beac-6ab4e9d05117","Type":"ContainerDied","Data":"55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc"} Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.446423 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-25tx8" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.446436 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55436ba697463f328a1a31b1d16a7ee7dd2c6b5c9f73c70e5840b813904b27dc" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.448817 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" event={"ID":"7508feea-1859-4276-9235-4008ec0767a0","Type":"ContainerDied","Data":"1121f45b7b6afbd54adf2861f4dac92fd901bfe73b6acefceeb84d0121f4243e"} Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.448866 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1121f45b7b6afbd54adf2861f4dac92fd901bfe73b6acefceeb84d0121f4243e" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.448994 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f37-account-create-update-ns88t" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.450182 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" event={"ID":"8ba08e34-40c6-440c-961b-757a4716a9fc","Type":"ContainerDied","Data":"320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e"} Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.450222 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320a48144f9fe12b900cfa035590e41204002ac8338a540f840be58b63304a2e" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.450308 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-64f9-account-create-update-mwfg6" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687133 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687633 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba08e34-40c6-440c-961b-757a4716a9fc" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687654 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba08e34-40c6-440c-961b-757a4716a9fc" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687665 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffbb1fa-90be-4883-beac-6ab4e9d05117" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687675 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffbb1fa-90be-4883-beac-6ab4e9d05117" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687694 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7e9679-5e88-434f-9060-cb5da12c3217" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687702 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7e9679-5e88-434f-9060-cb5da12c3217" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687712 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81bdbbf-7100-4e76-82e3-92585b3fe6f4" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687719 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81bdbbf-7100-4e76-82e3-92585b3fe6f4" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687733 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f70372-2930-4901-a532-193c12377d2e" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687740 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f70372-2930-4901-a532-193c12377d2e" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: E0127 20:14:48.687753 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7508feea-1859-4276-9235-4008ec0767a0" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.687760 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7508feea-1859-4276-9235-4008ec0767a0" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688013 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba08e34-40c6-440c-961b-757a4716a9fc" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688031 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7508feea-1859-4276-9235-4008ec0767a0" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688040 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffbb1fa-90be-4883-beac-6ab4e9d05117" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688055 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81bdbbf-7100-4e76-82e3-92585b3fe6f4" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688071 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f70372-2930-4901-a532-193c12377d2e" containerName="mariadb-database-create" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.688088 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd7e9679-5e88-434f-9060-cb5da12c3217" containerName="mariadb-account-create-update" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.689759 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.707585 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.817299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.817416 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.817503 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxcq\" (UniqueName: \"kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.919630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content\") pod \"community-operators-5bjhm\" 
(UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.919736 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.919833 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxcq\" (UniqueName: \"kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.920188 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.920439 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities\") pod \"community-operators-5bjhm\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:48 crc kubenswrapper[4915]: I0127 20:14:48.949089 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxcq\" (UniqueName: \"kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq\") pod \"community-operators-5bjhm\" (UID: 
\"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.051998 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.536462 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.548580 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-29rl9"] Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.550268 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.553338 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2qk9z" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.554662 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.555457 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.559537 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-29rl9"] Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.629857 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.630014 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.630059 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.630087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qplh8\" (UniqueName: \"kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.731445 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.731615 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 
20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.731656 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.731684 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qplh8\" (UniqueName: \"kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.738892 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.738959 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.739018 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 
20:14:49.749228 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qplh8\" (UniqueName: \"kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8\") pod \"nova-cell0-conductor-db-sync-29rl9\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:49 crc kubenswrapper[4915]: I0127 20:14:49.906931 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.340816 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-29rl9"] Jan 27 20:14:50 crc kubenswrapper[4915]: W0127 20:14:50.343872 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e95f84a_c23d_46b0_b1bc_619ca8ae283c.slice/crio-b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661 WatchSource:0}: Error finding container b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661: Status 404 returned error can't find the container with id b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661 Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.474017 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-29rl9" event={"ID":"8e95f84a-c23d-46b0-b1bc-619ca8ae283c","Type":"ContainerStarted","Data":"b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661"} Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.476283 4915 generic.go:334] "Generic (PLEG): container finished" podID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerID="15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5" exitCode=0 Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.476331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerDied","Data":"15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5"} Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.476363 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerStarted","Data":"35299fc97efafcf843be318b18170685b1ce690c277556e4378d24334e26782c"} Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.624301 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.624625 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.624662 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.625444 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:14:50 crc kubenswrapper[4915]: I0127 20:14:50.625521 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480" gracePeriod=600 Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.487240 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480" exitCode=0 Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.487962 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480"} Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.488000 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"} Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.488020 4915 scope.go:117] "RemoveContainer" containerID="5ecf77019c5e42b43f93f8e02a1bae45547ab1c1d295938bc8906b34def26dda" Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.492641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerStarted","Data":"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af"} Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.500257 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-29rl9" 
event={"ID":"8e95f84a-c23d-46b0-b1bc-619ca8ae283c","Type":"ContainerStarted","Data":"969f029243ffebf7028457f97459cf302f8157643118e8acefd8fd27814c358a"} Jan 27 20:14:51 crc kubenswrapper[4915]: I0127 20:14:51.542902 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-29rl9" podStartSLOduration=2.542880217 podStartE2EDuration="2.542880217s" podCreationTimestamp="2026-01-27 20:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:14:51.542059206 +0000 UTC m=+5582.899912950" watchObservedRunningTime="2026-01-27 20:14:51.542880217 +0000 UTC m=+5582.900733881" Jan 27 20:14:52 crc kubenswrapper[4915]: I0127 20:14:52.518551 4915 generic.go:334] "Generic (PLEG): container finished" podID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerID="4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af" exitCode=0 Jan 27 20:14:52 crc kubenswrapper[4915]: I0127 20:14:52.518629 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerDied","Data":"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af"} Jan 27 20:14:53 crc kubenswrapper[4915]: I0127 20:14:53.534553 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerStarted","Data":"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8"} Jan 27 20:14:53 crc kubenswrapper[4915]: I0127 20:14:53.558116 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bjhm" podStartSLOduration=2.863069076 podStartE2EDuration="5.558096509s" podCreationTimestamp="2026-01-27 20:14:48 +0000 UTC" firstStartedPulling="2026-01-27 20:14:50.477822604 +0000 UTC 
m=+5581.835676278" lastFinishedPulling="2026-01-27 20:14:53.172850037 +0000 UTC m=+5584.530703711" observedRunningTime="2026-01-27 20:14:53.554676295 +0000 UTC m=+5584.912529959" watchObservedRunningTime="2026-01-27 20:14:53.558096509 +0000 UTC m=+5584.915950183" Jan 27 20:14:56 crc kubenswrapper[4915]: I0127 20:14:56.297237 4915 scope.go:117] "RemoveContainer" containerID="849c54f277444de56721b46a6e90f1d198c2a4091d2b315b3d8ff9750ad58c5f" Jan 27 20:14:56 crc kubenswrapper[4915]: I0127 20:14:56.332887 4915 scope.go:117] "RemoveContainer" containerID="74c18c6694a0e108607c5b1348d49a223bffa1dfbd42ef36699a336ee3bfd818" Jan 27 20:14:56 crc kubenswrapper[4915]: I0127 20:14:56.361068 4915 scope.go:117] "RemoveContainer" containerID="431edf60c8d72469395f2b72c8f53e35d7c190ac49177a2b5301e3bc1284eb2f" Jan 27 20:14:56 crc kubenswrapper[4915]: I0127 20:14:56.565470 4915 generic.go:334] "Generic (PLEG): container finished" podID="8e95f84a-c23d-46b0-b1bc-619ca8ae283c" containerID="969f029243ffebf7028457f97459cf302f8157643118e8acefd8fd27814c358a" exitCode=0 Jan 27 20:14:56 crc kubenswrapper[4915]: I0127 20:14:56.565530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-29rl9" event={"ID":"8e95f84a-c23d-46b0-b1bc-619ca8ae283c","Type":"ContainerDied","Data":"969f029243ffebf7028457f97459cf302f8157643118e8acefd8fd27814c358a"} Jan 27 20:14:57 crc kubenswrapper[4915]: I0127 20:14:57.926210 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.018557 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts\") pod \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.018854 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qplh8\" (UniqueName: \"kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8\") pod \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.019109 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data\") pod \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.019223 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle\") pod \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\" (UID: \"8e95f84a-c23d-46b0-b1bc-619ca8ae283c\") " Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.028291 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8" (OuterVolumeSpecName: "kube-api-access-qplh8") pod "8e95f84a-c23d-46b0-b1bc-619ca8ae283c" (UID: "8e95f84a-c23d-46b0-b1bc-619ca8ae283c"). InnerVolumeSpecName "kube-api-access-qplh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.028731 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts" (OuterVolumeSpecName: "scripts") pod "8e95f84a-c23d-46b0-b1bc-619ca8ae283c" (UID: "8e95f84a-c23d-46b0-b1bc-619ca8ae283c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.051124 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e95f84a-c23d-46b0-b1bc-619ca8ae283c" (UID: "8e95f84a-c23d-46b0-b1bc-619ca8ae283c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.065873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data" (OuterVolumeSpecName: "config-data") pod "8e95f84a-c23d-46b0-b1bc-619ca8ae283c" (UID: "8e95f84a-c23d-46b0-b1bc-619ca8ae283c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.122164 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.122222 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.122247 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qplh8\" (UniqueName: \"kubernetes.io/projected/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-kube-api-access-qplh8\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.122486 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e95f84a-c23d-46b0-b1bc-619ca8ae283c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.587378 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-29rl9" event={"ID":"8e95f84a-c23d-46b0-b1bc-619ca8ae283c","Type":"ContainerDied","Data":"b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661"} Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.587426 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8764b26f47547e6568c79acb1368919acf7c76cc7f8a80673bc8a0957afe661" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.587453 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-29rl9" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.657534 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:14:58 crc kubenswrapper[4915]: E0127 20:14:58.658269 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e95f84a-c23d-46b0-b1bc-619ca8ae283c" containerName="nova-cell0-conductor-db-sync" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.658490 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e95f84a-c23d-46b0-b1bc-619ca8ae283c" containerName="nova-cell0-conductor-db-sync" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.658786 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e95f84a-c23d-46b0-b1bc-619ca8ae283c" containerName="nova-cell0-conductor-db-sync" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.659762 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.662122 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2qk9z" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.662222 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.691044 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.731148 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: 
I0127 20:14:58.731204 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.731307 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2tz\" (UniqueName: \"kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.833123 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.833218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.833263 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf2tz\" (UniqueName: \"kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.836918 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.837744 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.850368 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf2tz\" (UniqueName: \"kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz\") pod \"nova-cell0-conductor-0\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:58 crc kubenswrapper[4915]: I0127 20:14:58.992068 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.053185 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.053731 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.124495 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:59 crc kubenswrapper[4915]: W0127 20:14:59.436548 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3080a4d_b714_443c_83cc_ba3e7b82e3b5.slice/crio-6a695faf470c56ac1f1f9e212eb9fb9ee0c55e737c5d94c192cd4aa656ce3bcd WatchSource:0}: Error finding container 6a695faf470c56ac1f1f9e212eb9fb9ee0c55e737c5d94c192cd4aa656ce3bcd: Status 404 returned error can't find the container with id 6a695faf470c56ac1f1f9e212eb9fb9ee0c55e737c5d94c192cd4aa656ce3bcd Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.436606 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.597308 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c3080a4d-b714-443c-83cc-ba3e7b82e3b5","Type":"ContainerStarted","Data":"6a695faf470c56ac1f1f9e212eb9fb9ee0c55e737c5d94c192cd4aa656ce3bcd"} Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.654800 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:14:59 crc kubenswrapper[4915]: I0127 20:14:59.699547 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.149472 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq"] Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.151020 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.153483 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.153551 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.164739 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq"] Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.258002 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.258096 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7ck5\" (UniqueName: \"kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 
20:15:00.258163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.359922 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.360271 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ck5\" (UniqueName: \"kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.360331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.361219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.379309 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.383499 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7ck5\" (UniqueName: \"kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5\") pod \"collect-profiles-29492415-trlxq\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.476953 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.608980 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c3080a4d-b714-443c-83cc-ba3e7b82e3b5","Type":"ContainerStarted","Data":"a789ec3a448c54abbf18cab4369fc2c2365028f5015485ed2367d4376fcf350c"} Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.609194 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.633547 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.633514351 podStartE2EDuration="2.633514351s" podCreationTimestamp="2026-01-27 20:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:00.623627648 +0000 UTC m=+5591.981481312" watchObservedRunningTime="2026-01-27 20:15:00.633514351 +0000 UTC m=+5591.991368015" Jan 27 20:15:00 crc kubenswrapper[4915]: I0127 20:15:00.933601 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq"] Jan 27 20:15:00 crc kubenswrapper[4915]: W0127 20:15:00.948336 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2584d4_dae1_4371_be0f_f27cedc845f9.slice/crio-c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635 WatchSource:0}: Error finding container c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635: Status 404 returned error can't find the container with id c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635 Jan 27 20:15:01 crc kubenswrapper[4915]: I0127 20:15:01.618100 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="0a2584d4-dae1-4371-be0f-f27cedc845f9" containerID="7130398f323392b546c973bf9f26a234c699ec4d90acc64223c4d9179671b8c7" exitCode=0 Jan 27 20:15:01 crc kubenswrapper[4915]: I0127 20:15:01.618222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" event={"ID":"0a2584d4-dae1-4371-be0f-f27cedc845f9","Type":"ContainerDied","Data":"7130398f323392b546c973bf9f26a234c699ec4d90acc64223c4d9179671b8c7"} Jan 27 20:15:01 crc kubenswrapper[4915]: I0127 20:15:01.618494 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" event={"ID":"0a2584d4-dae1-4371-be0f-f27cedc845f9","Type":"ContainerStarted","Data":"c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635"} Jan 27 20:15:01 crc kubenswrapper[4915]: I0127 20:15:01.618861 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bjhm" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="registry-server" containerID="cri-o://f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8" gracePeriod=2 Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.088871 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.190673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxcq\" (UniqueName: \"kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq\") pod \"417482b3-b73b-41a8-a137-f6f86fe78ae5\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.190857 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content\") pod \"417482b3-b73b-41a8-a137-f6f86fe78ae5\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.190931 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities\") pod \"417482b3-b73b-41a8-a137-f6f86fe78ae5\" (UID: \"417482b3-b73b-41a8-a137-f6f86fe78ae5\") " Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.191746 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities" (OuterVolumeSpecName: "utilities") pod "417482b3-b73b-41a8-a137-f6f86fe78ae5" (UID: "417482b3-b73b-41a8-a137-f6f86fe78ae5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.199371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq" (OuterVolumeSpecName: "kube-api-access-7kxcq") pod "417482b3-b73b-41a8-a137-f6f86fe78ae5" (UID: "417482b3-b73b-41a8-a137-f6f86fe78ae5"). InnerVolumeSpecName "kube-api-access-7kxcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.249612 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "417482b3-b73b-41a8-a137-f6f86fe78ae5" (UID: "417482b3-b73b-41a8-a137-f6f86fe78ae5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.293400 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.293441 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxcq\" (UniqueName: \"kubernetes.io/projected/417482b3-b73b-41a8-a137-f6f86fe78ae5-kube-api-access-7kxcq\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.293452 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417482b3-b73b-41a8-a137-f6f86fe78ae5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.632337 4915 generic.go:334] "Generic (PLEG): container finished" podID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerID="f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8" exitCode=0 Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.632427 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bjhm" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.632520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerDied","Data":"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8"} Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.632592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bjhm" event={"ID":"417482b3-b73b-41a8-a137-f6f86fe78ae5","Type":"ContainerDied","Data":"35299fc97efafcf843be318b18170685b1ce690c277556e4378d24334e26782c"} Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.632633 4915 scope.go:117] "RemoveContainer" containerID="f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.666861 4915 scope.go:117] "RemoveContainer" containerID="4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.712866 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.713599 4915 scope.go:117] "RemoveContainer" containerID="15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.728850 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bjhm"] Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.746342 4915 scope.go:117] "RemoveContainer" containerID="f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8" Jan 27 20:15:02 crc kubenswrapper[4915]: E0127 20:15:02.747077 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8\": container with ID starting with f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8 not found: ID does not exist" containerID="f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.747111 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8"} err="failed to get container status \"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8\": rpc error: code = NotFound desc = could not find container \"f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8\": container with ID starting with f37646ef3d6fa4a2a3458031057fa7f162f1c94b75b3c1f3fea8dcc747d64de8 not found: ID does not exist" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.747137 4915 scope.go:117] "RemoveContainer" containerID="4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af" Jan 27 20:15:02 crc kubenswrapper[4915]: E0127 20:15:02.747838 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af\": container with ID starting with 4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af not found: ID does not exist" containerID="4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.747908 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af"} err="failed to get container status \"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af\": rpc error: code = NotFound desc = could not find container \"4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af\": container with ID 
starting with 4740d09b1503a9a1ed4d9f0314dfcae01cf52b56cdbb928b8408be47326dc7af not found: ID does not exist" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.747940 4915 scope.go:117] "RemoveContainer" containerID="15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5" Jan 27 20:15:02 crc kubenswrapper[4915]: E0127 20:15:02.748643 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5\": container with ID starting with 15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5 not found: ID does not exist" containerID="15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.748694 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5"} err="failed to get container status \"15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5\": rpc error: code = NotFound desc = could not find container \"15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5\": container with ID starting with 15cb47e5ea8068411ee2e8bb3cb313f95479438b9162341144a1c41938118ac5 not found: ID does not exist" Jan 27 20:15:02 crc kubenswrapper[4915]: I0127 20:15:02.976264 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.110250 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume\") pod \"0a2584d4-dae1-4371-be0f-f27cedc845f9\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.110408 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ck5\" (UniqueName: \"kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5\") pod \"0a2584d4-dae1-4371-be0f-f27cedc845f9\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.110470 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume\") pod \"0a2584d4-dae1-4371-be0f-f27cedc845f9\" (UID: \"0a2584d4-dae1-4371-be0f-f27cedc845f9\") " Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.111107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a2584d4-dae1-4371-be0f-f27cedc845f9" (UID: "0a2584d4-dae1-4371-be0f-f27cedc845f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.115186 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a2584d4-dae1-4371-be0f-f27cedc845f9" (UID: "0a2584d4-dae1-4371-be0f-f27cedc845f9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.116306 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5" (OuterVolumeSpecName: "kube-api-access-d7ck5") pod "0a2584d4-dae1-4371-be0f-f27cedc845f9" (UID: "0a2584d4-dae1-4371-be0f-f27cedc845f9"). InnerVolumeSpecName "kube-api-access-d7ck5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.213578 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7ck5\" (UniqueName: \"kubernetes.io/projected/0a2584d4-dae1-4371-be0f-f27cedc845f9-kube-api-access-d7ck5\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.213620 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a2584d4-dae1-4371-be0f-f27cedc845f9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.213633 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2584d4-dae1-4371-be0f-f27cedc845f9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.373531 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" path="/var/lib/kubelet/pods/417482b3-b73b-41a8-a137-f6f86fe78ae5/volumes" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.643263 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" event={"ID":"0a2584d4-dae1-4371-be0f-f27cedc845f9","Type":"ContainerDied","Data":"c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635"} Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.643312 4915 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c787a9c7b71984dc1f511f72df820558aeb8ad6c1bbafe3bc626538944430635" Jan 27 20:15:03 crc kubenswrapper[4915]: I0127 20:15:03.644081 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq" Jan 27 20:15:04 crc kubenswrapper[4915]: I0127 20:15:04.077684 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"] Jan 27 20:15:04 crc kubenswrapper[4915]: I0127 20:15:04.085083 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-mxv69"] Jan 27 20:15:05 crc kubenswrapper[4915]: I0127 20:15:05.367709 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a0a309-348a-4379-a435-a2e95ee9d37d" path="/var/lib/kubelet/pods/58a0a309-348a-4379-a435-a2e95ee9d37d/volumes" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.043137 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.597555 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qw7g5"] Jan 27 20:15:09 crc kubenswrapper[4915]: E0127 20:15:09.598305 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="extract-utilities" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.598320 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="extract-utilities" Jan 27 20:15:09 crc kubenswrapper[4915]: E0127 20:15:09.598330 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="extract-content" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 
20:15:09.598336 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="extract-content" Jan 27 20:15:09 crc kubenswrapper[4915]: E0127 20:15:09.598357 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2584d4-dae1-4371-be0f-f27cedc845f9" containerName="collect-profiles" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.598364 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2584d4-dae1-4371-be0f-f27cedc845f9" containerName="collect-profiles" Jan 27 20:15:09 crc kubenswrapper[4915]: E0127 20:15:09.598371 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="registry-server" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.598378 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="registry-server" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.598533 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2584d4-dae1-4371-be0f-f27cedc845f9" containerName="collect-profiles" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.598553 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="417482b3-b73b-41a8-a137-f6f86fe78ae5" containerName="registry-server" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.599182 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.601803 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.602545 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.613770 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qw7g5"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.730744 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.731179 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.731345 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgk56\" (UniqueName: \"kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.731546 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.734928 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.737060 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.739275 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.755063 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.789110 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.791210 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.795364 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.812532 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.865822 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.865919 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dgg\" (UniqueName: \"kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866111 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmtk\" (UniqueName: \"kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866140 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866177 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866225 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866265 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgk56\" (UniqueName: \"kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866300 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866321 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866425 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.866507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.884151 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.890136 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.902757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: 
\"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.909336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgk56\" (UniqueName: \"kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56\") pod \"nova-cell0-cell-mapping-qw7g5\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.922600 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.942032 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.945468 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.967864 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9dgg\" (UniqueName: \"kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968253 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968285 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmtk\" (UniqueName: \"kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968347 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968408 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.968896 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.977863 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.978238 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.983185 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.996992 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:09 crc kubenswrapper[4915]: I0127 20:15:09.997639 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.008132 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.009237 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.011529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7qmtk\" (UniqueName: \"kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.015924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"] Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.016278 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.016753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9dgg\" (UniqueName: \"kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg\") pod \"nova-metadata-0\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " pod="openstack/nova-metadata-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.017862 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " pod="openstack/nova-api-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.033314 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.035020 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.037474 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.042903 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.069508 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.072089 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.077185 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.085694 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2t9\" (UniqueName: \"kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.085985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.086150 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.086299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.091376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.091609 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8g2\" (UniqueName: \"kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.128761 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194089 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194140 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194190 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194229 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194252 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8g2\" (UniqueName: \"kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 
20:15:10.194297 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194328 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194398 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194429 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2t9\" (UniqueName: \"kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.194448 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l228t\" (UniqueName: \"kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.195209 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.195253 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.196520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.196553 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.201322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.201902 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.217198 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2t9\" (UniqueName: \"kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9\") pod \"dnsmasq-dns-5db6b59b7f-dv4xq\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") " pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.220870 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8g2\" (UniqueName: \"kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2\") pod \"nova-scheduler-0\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.295734 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l228t\" (UniqueName: \"kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.296230 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.296262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.300538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.308295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.319400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l228t\" (UniqueName: \"kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t\") pod \"nova-cell1-novncproxy-0\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.403917 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.445566 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.447701 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qw7g5"] Jan 27 20:15:10 crc kubenswrapper[4915]: W0127 20:15:10.451413 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65254045_1103_45cd_b7ad_da904497894e.slice/crio-a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c WatchSource:0}: Error finding container a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c: Status 404 returned error can't find the container with id a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.457446 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.549774 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s687l"] Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.550932 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.553976 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.554040 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.559762 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s687l"] Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.620337 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:10 crc kubenswrapper[4915]: W0127 20:15:10.630951 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cea03b9_f218_4d20_86e6_94add13710e3.slice/crio-488d79262d62f014ecde7310bc13278d93d56b352bff6267caae5b5b44c11b09 WatchSource:0}: Error finding container 488d79262d62f014ecde7310bc13278d93d56b352bff6267caae5b5b44c11b09: Status 404 returned error can't find the container with id 488d79262d62f014ecde7310bc13278d93d56b352bff6267caae5b5b44c11b09 Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.704116 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:10 crc kubenswrapper[4915]: W0127 20:15:10.712851 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2015bf50_de5e_432d_8732_a894c1593de6.slice/crio-94807ba8fbd5b69ede568351a1c576344f58fe2d348399020001bf6f709aaa9c WatchSource:0}: Error finding container 94807ba8fbd5b69ede568351a1c576344f58fe2d348399020001bf6f709aaa9c: Status 404 returned error can't find the container with id 94807ba8fbd5b69ede568351a1c576344f58fe2d348399020001bf6f709aaa9c Jan 27 20:15:10 crc 
kubenswrapper[4915]: I0127 20:15:10.716161 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerStarted","Data":"488d79262d62f014ecde7310bc13278d93d56b352bff6267caae5b5b44c11b09"} Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.717133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xld6w\" (UniqueName: \"kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.717234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.717753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.717866 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 
20:15:10.722622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qw7g5" event={"ID":"65254045-1103-45cd-b7ad-da904497894e","Type":"ContainerStarted","Data":"ce21af0f387886bdb895c158c3d058a99f94f7ff8aeaa61153332404d699aeea"} Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.722672 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qw7g5" event={"ID":"65254045-1103-45cd-b7ad-da904497894e","Type":"ContainerStarted","Data":"a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c"} Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.739546 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qw7g5" podStartSLOduration=1.739520704 podStartE2EDuration="1.739520704s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:10.734332726 +0000 UTC m=+5602.092186390" watchObservedRunningTime="2026-01-27 20:15:10.739520704 +0000 UTC m=+5602.097374368" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.818971 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xld6w\" (UniqueName: \"kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.819350 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc 
kubenswrapper[4915]: I0127 20:15:10.819401 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.819440 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.824249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.825180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.825757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.835732 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xld6w\" (UniqueName: \"kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w\") pod \"nova-cell1-conductor-db-sync-s687l\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.915373 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:10 crc kubenswrapper[4915]: I0127 20:15:10.962687 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"] Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.069476 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.079206 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.382879 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s687l"] Jan 27 20:15:11 crc kubenswrapper[4915]: W0127 20:15:11.385947 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85014245_b9ed_4ed1_834f_224ba6e918b0.slice/crio-507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04 WatchSource:0}: Error finding container 507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04: Status 404 returned error can't find the container with id 507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04 Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.732719 4915 generic.go:334] "Generic (PLEG): container finished" podID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerID="89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029" exitCode=0 Jan 27 20:15:11 crc 
kubenswrapper[4915]: I0127 20:15:11.732824 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" event={"ID":"937a84dd-3c48-47b5-831f-40bea0da2e8a","Type":"ContainerDied","Data":"89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.733109 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" event={"ID":"937a84dd-3c48-47b5-831f-40bea0da2e8a","Type":"ContainerStarted","Data":"471434a8cf8e98852ad4f6c2e0edbad1fb7577f2707f1169651f1d300dac84e6"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.735165 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s687l" event={"ID":"85014245-b9ed-4ed1-834f-224ba6e918b0","Type":"ContainerStarted","Data":"ed649be42c4904a753bb18e29f66456767f94078c79d8d9a28233c88df28e18e"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.735198 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s687l" event={"ID":"85014245-b9ed-4ed1-834f-224ba6e918b0","Type":"ContainerStarted","Data":"507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.740457 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2e40c09-ab7a-452a-a474-50bce06699c4","Type":"ContainerStarted","Data":"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.740498 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2e40c09-ab7a-452a-a474-50bce06699c4","Type":"ContainerStarted","Data":"44b845b8d64b6e8971b17a94c6162de0b3be84a51b33774fce87f399c44dd7e8"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.755885 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerStarted","Data":"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.755998 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerStarted","Data":"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.764104 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26208d0d-91a2-4a24-8bf2-bcb293abe6ba","Type":"ContainerStarted","Data":"c901a3dde72a6e18513cdc0e1435998cc7c1e2466153cb2b3e0a5bd429f9bff3"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.764157 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26208d0d-91a2-4a24-8bf2-bcb293abe6ba","Type":"ContainerStarted","Data":"ef9df8e35e2ebab521ed2e887026fe4a6536b92fcc9d9704f123ada129cade87"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.770538 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-s687l" podStartSLOduration=1.770519347 podStartE2EDuration="1.770519347s" podCreationTimestamp="2026-01-27 20:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:11.769746458 +0000 UTC m=+5603.127600132" watchObservedRunningTime="2026-01-27 20:15:11.770519347 +0000 UTC m=+5603.128373011" Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.774285 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerStarted","Data":"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 
20:15:11.774400 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerStarted","Data":"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.774524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerStarted","Data":"94807ba8fbd5b69ede568351a1c576344f58fe2d348399020001bf6f709aaa9c"} Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.805889 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.805865608 podStartE2EDuration="2.805865608s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:11.790831908 +0000 UTC m=+5603.148685582" watchObservedRunningTime="2026-01-27 20:15:11.805865608 +0000 UTC m=+5603.163719272" Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.812027 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.812000029 podStartE2EDuration="2.812000029s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:11.807448367 +0000 UTC m=+5603.165302031" watchObservedRunningTime="2026-01-27 20:15:11.812000029 +0000 UTC m=+5603.169853693" Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.852153 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.852127838 podStartE2EDuration="2.852127838s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:11.831225583 +0000 UTC m=+5603.189079247" watchObservedRunningTime="2026-01-27 20:15:11.852127838 +0000 UTC m=+5603.209981512" Jan 27 20:15:11 crc kubenswrapper[4915]: I0127 20:15:11.861210 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.861192221 podStartE2EDuration="2.861192221s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:11.852740063 +0000 UTC m=+5603.210593727" watchObservedRunningTime="2026-01-27 20:15:11.861192221 +0000 UTC m=+5603.219045885" Jan 27 20:15:12 crc kubenswrapper[4915]: I0127 20:15:12.794328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" event={"ID":"937a84dd-3c48-47b5-831f-40bea0da2e8a","Type":"ContainerStarted","Data":"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"} Jan 27 20:15:12 crc kubenswrapper[4915]: I0127 20:15:12.828124 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" podStartSLOduration=3.828098815 podStartE2EDuration="3.828098815s" podCreationTimestamp="2026-01-27 20:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:12.821276327 +0000 UTC m=+5604.179130001" watchObservedRunningTime="2026-01-27 20:15:12.828098815 +0000 UTC m=+5604.185952509" Jan 27 20:15:13 crc kubenswrapper[4915]: I0127 20:15:13.802529 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" Jan 27 20:15:14 crc kubenswrapper[4915]: I0127 20:15:14.821774 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="85014245-b9ed-4ed1-834f-224ba6e918b0" containerID="ed649be42c4904a753bb18e29f66456767f94078c79d8d9a28233c88df28e18e" exitCode=0 Jan 27 20:15:14 crc kubenswrapper[4915]: I0127 20:15:14.822247 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s687l" event={"ID":"85014245-b9ed-4ed1-834f-224ba6e918b0","Type":"ContainerDied","Data":"ed649be42c4904a753bb18e29f66456767f94078c79d8d9a28233c88df28e18e"} Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.129982 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.130036 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.445925 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.458232 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.837488 4915 generic.go:334] "Generic (PLEG): container finished" podID="65254045-1103-45cd-b7ad-da904497894e" containerID="ce21af0f387886bdb895c158c3d058a99f94f7ff8aeaa61153332404d699aeea" exitCode=0 Jan 27 20:15:15 crc kubenswrapper[4915]: I0127 20:15:15.837580 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qw7g5" event={"ID":"65254045-1103-45cd-b7ad-da904497894e","Type":"ContainerDied","Data":"ce21af0f387886bdb895c158c3d058a99f94f7ff8aeaa61153332404d699aeea"} Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.216850 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.341289 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle\") pod \"85014245-b9ed-4ed1-834f-224ba6e918b0\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.341438 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts\") pod \"85014245-b9ed-4ed1-834f-224ba6e918b0\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.341485 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data\") pod \"85014245-b9ed-4ed1-834f-224ba6e918b0\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.341525 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xld6w\" (UniqueName: \"kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w\") pod \"85014245-b9ed-4ed1-834f-224ba6e918b0\" (UID: \"85014245-b9ed-4ed1-834f-224ba6e918b0\") " Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.346854 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w" (OuterVolumeSpecName: "kube-api-access-xld6w") pod "85014245-b9ed-4ed1-834f-224ba6e918b0" (UID: "85014245-b9ed-4ed1-834f-224ba6e918b0"). InnerVolumeSpecName "kube-api-access-xld6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.346864 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts" (OuterVolumeSpecName: "scripts") pod "85014245-b9ed-4ed1-834f-224ba6e918b0" (UID: "85014245-b9ed-4ed1-834f-224ba6e918b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.365476 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85014245-b9ed-4ed1-834f-224ba6e918b0" (UID: "85014245-b9ed-4ed1-834f-224ba6e918b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.386719 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data" (OuterVolumeSpecName: "config-data") pod "85014245-b9ed-4ed1-834f-224ba6e918b0" (UID: "85014245-b9ed-4ed1-834f-224ba6e918b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.443476 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xld6w\" (UniqueName: \"kubernetes.io/projected/85014245-b9ed-4ed1-834f-224ba6e918b0-kube-api-access-xld6w\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.443512 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.443524 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.443534 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85014245-b9ed-4ed1-834f-224ba6e918b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.850926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-s687l" event={"ID":"85014245-b9ed-4ed1-834f-224ba6e918b0","Type":"ContainerDied","Data":"507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04"} Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.850979 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507b68f264f5af30e38c94a7b1f04422da288cd479bb22546737f85367e6ef04" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.851133 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-s687l" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.941880 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:15:16 crc kubenswrapper[4915]: E0127 20:15:16.946069 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85014245-b9ed-4ed1-834f-224ba6e918b0" containerName="nova-cell1-conductor-db-sync" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.946211 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="85014245-b9ed-4ed1-834f-224ba6e918b0" containerName="nova-cell1-conductor-db-sync" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.946430 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="85014245-b9ed-4ed1-834f-224ba6e918b0" containerName="nova-cell1-conductor-db-sync" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.947379 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.950223 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 20:15:16 crc kubenswrapper[4915]: I0127 20:15:16.985489 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.059775 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.059854 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7msr\" (UniqueName: 
\"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.059881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.162055 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.162257 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.162303 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7msr\" (UniqueName: \"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.168423 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.169372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.187504 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7msr\" (UniqueName: \"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr\") pod \"nova-cell1-conductor-0\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.246450 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.268699 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.365469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts\") pod \"65254045-1103-45cd-b7ad-da904497894e\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.365531 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgk56\" (UniqueName: \"kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56\") pod \"65254045-1103-45cd-b7ad-da904497894e\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.365596 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data\") pod \"65254045-1103-45cd-b7ad-da904497894e\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.365712 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle\") pod \"65254045-1103-45cd-b7ad-da904497894e\" (UID: \"65254045-1103-45cd-b7ad-da904497894e\") " Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.374988 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts" (OuterVolumeSpecName: "scripts") pod "65254045-1103-45cd-b7ad-da904497894e" (UID: "65254045-1103-45cd-b7ad-da904497894e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.374999 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56" (OuterVolumeSpecName: "kube-api-access-pgk56") pod "65254045-1103-45cd-b7ad-da904497894e" (UID: "65254045-1103-45cd-b7ad-da904497894e"). InnerVolumeSpecName "kube-api-access-pgk56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.390848 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65254045-1103-45cd-b7ad-da904497894e" (UID: "65254045-1103-45cd-b7ad-da904497894e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.395128 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data" (OuterVolumeSpecName: "config-data") pod "65254045-1103-45cd-b7ad-da904497894e" (UID: "65254045-1103-45cd-b7ad-da904497894e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.473248 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.473514 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.474252 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgk56\" (UniqueName: \"kubernetes.io/projected/65254045-1103-45cd-b7ad-da904497894e-kube-api-access-pgk56\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.474271 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65254045-1103-45cd-b7ad-da904497894e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:17 crc kubenswrapper[4915]: W0127 20:15:17.698258 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc556fb16_444f_434b_a1a6_54c41a6dc92b.slice/crio-bd11bde80d0770ca35ee0df746a6f4c49f8c78533ac690ee307429fe40d6443d WatchSource:0}: Error finding container bd11bde80d0770ca35ee0df746a6f4c49f8c78533ac690ee307429fe40d6443d: Status 404 returned error can't find the container with id bd11bde80d0770ca35ee0df746a6f4c49f8c78533ac690ee307429fe40d6443d Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.698892 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.861885 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qw7g5" 
event={"ID":"65254045-1103-45cd-b7ad-da904497894e","Type":"ContainerDied","Data":"a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c"} Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.862177 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a04b3cf0f5ce3201e1982b726d1c13c43956cd7b115beb83945b63e0d6de2d7c" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.861925 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qw7g5" Jan 27 20:15:17 crc kubenswrapper[4915]: I0127 20:15:17.863316 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c556fb16-444f-434b-a1a6-54c41a6dc92b","Type":"ContainerStarted","Data":"bd11bde80d0770ca35ee0df746a6f4c49f8c78533ac690ee307429fe40d6443d"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.039744 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.040022 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-log" containerID="cri-o://80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" gracePeriod=30 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.040175 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-api" containerID="cri-o://c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" gracePeriod=30 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.064260 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.064484 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="d2e40c09-ab7a-452a-a474-50bce06699c4" containerName="nova-scheduler-scheduler" containerID="cri-o://a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2" gracePeriod=30 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.091418 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.091696 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-log" containerID="cri-o://669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731" gracePeriod=30 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.091879 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-metadata" containerID="cri-o://ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d" gracePeriod=30 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.582903 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.660584 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.701384 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs\") pod \"4cea03b9-f218-4d20-86e6-94add13710e3\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.701500 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qmtk\" (UniqueName: \"kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk\") pod \"4cea03b9-f218-4d20-86e6-94add13710e3\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.701579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data\") pod \"4cea03b9-f218-4d20-86e6-94add13710e3\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.701638 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle\") pod \"4cea03b9-f218-4d20-86e6-94add13710e3\" (UID: \"4cea03b9-f218-4d20-86e6-94add13710e3\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.702374 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs" (OuterVolumeSpecName: "logs") pod "4cea03b9-f218-4d20-86e6-94add13710e3" (UID: "4cea03b9-f218-4d20-86e6-94add13710e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.710077 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk" (OuterVolumeSpecName: "kube-api-access-7qmtk") pod "4cea03b9-f218-4d20-86e6-94add13710e3" (UID: "4cea03b9-f218-4d20-86e6-94add13710e3"). InnerVolumeSpecName "kube-api-access-7qmtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.728754 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data" (OuterVolumeSpecName: "config-data") pod "4cea03b9-f218-4d20-86e6-94add13710e3" (UID: "4cea03b9-f218-4d20-86e6-94add13710e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.732521 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cea03b9-f218-4d20-86e6-94add13710e3" (UID: "4cea03b9-f218-4d20-86e6-94add13710e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.802775 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs\") pod \"2015bf50-de5e-432d-8732-a894c1593de6\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.802881 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data\") pod \"2015bf50-de5e-432d-8732-a894c1593de6\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.803025 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle\") pod \"2015bf50-de5e-432d-8732-a894c1593de6\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.803081 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9dgg\" (UniqueName: \"kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg\") pod \"2015bf50-de5e-432d-8732-a894c1593de6\" (UID: \"2015bf50-de5e-432d-8732-a894c1593de6\") " Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.803608 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs" (OuterVolumeSpecName: "logs") pod "2015bf50-de5e-432d-8732-a894c1593de6" (UID: "2015bf50-de5e-432d-8732-a894c1593de6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.803940 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cea03b9-f218-4d20-86e6-94add13710e3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.803991 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qmtk\" (UniqueName: \"kubernetes.io/projected/4cea03b9-f218-4d20-86e6-94add13710e3-kube-api-access-7qmtk\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.804007 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.804019 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cea03b9-f218-4d20-86e6-94add13710e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.807087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg" (OuterVolumeSpecName: "kube-api-access-m9dgg") pod "2015bf50-de5e-432d-8732-a894c1593de6" (UID: "2015bf50-de5e-432d-8732-a894c1593de6"). InnerVolumeSpecName "kube-api-access-m9dgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.825033 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data" (OuterVolumeSpecName: "config-data") pod "2015bf50-de5e-432d-8732-a894c1593de6" (UID: "2015bf50-de5e-432d-8732-a894c1593de6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.860473 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2015bf50-de5e-432d-8732-a894c1593de6" (UID: "2015bf50-de5e-432d-8732-a894c1593de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.873111 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cea03b9-f218-4d20-86e6-94add13710e3" containerID="c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" exitCode=0 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.873148 4915 generic.go:334] "Generic (PLEG): container finished" podID="4cea03b9-f218-4d20-86e6-94add13710e3" containerID="80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" exitCode=143 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.873227 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.874681 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerDied","Data":"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.875027 4915 scope.go:117] "RemoveContainer" containerID="c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.875491 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerDied","Data":"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878027 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878214 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cea03b9-f218-4d20-86e6-94add13710e3","Type":"ContainerDied","Data":"488d79262d62f014ecde7310bc13278d93d56b352bff6267caae5b5b44c11b09"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878383 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c556fb16-444f-434b-a1a6-54c41a6dc92b","Type":"ContainerStarted","Data":"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878515 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerDied","Data":"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878161 4915 generic.go:334] "Generic (PLEG): 
container finished" podID="2015bf50-de5e-432d-8732-a894c1593de6" containerID="ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d" exitCode=0 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878227 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.879057 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerDied","Data":"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.878830 4915 generic.go:334] "Generic (PLEG): container finished" podID="2015bf50-de5e-432d-8732-a894c1593de6" containerID="669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731" exitCode=143 Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.879375 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2015bf50-de5e-432d-8732-a894c1593de6","Type":"ContainerDied","Data":"94807ba8fbd5b69ede568351a1c576344f58fe2d348399020001bf6f709aaa9c"} Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.905367 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.905746 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9dgg\" (UniqueName: \"kubernetes.io/projected/2015bf50-de5e-432d-8732-a894c1593de6-kube-api-access-m9dgg\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.905762 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2015bf50-de5e-432d-8732-a894c1593de6-logs\") on node \"crc\" DevicePath \"\"" Jan 27 
20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.905775 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2015bf50-de5e-432d-8732-a894c1593de6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.916189 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9161614 podStartE2EDuration="2.9161614s" podCreationTimestamp="2026-01-27 20:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:18.900013392 +0000 UTC m=+5610.257867076" watchObservedRunningTime="2026-01-27 20:15:18.9161614 +0000 UTC m=+5610.274015084" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.926083 4915 scope.go:117] "RemoveContainer" containerID="80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.929183 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.947292 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.955338 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.962420 4915 scope.go:117] "RemoveContainer" containerID="c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.963383 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840\": container with ID starting with c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840 not found: ID does not exist" 
containerID="c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.963459 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840"} err="failed to get container status \"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840\": rpc error: code = NotFound desc = could not find container \"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840\": container with ID starting with c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840 not found: ID does not exist" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.963505 4915 scope.go:117] "RemoveContainer" containerID="80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.964162 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086\": container with ID starting with 80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086 not found: ID does not exist" containerID="80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964190 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086"} err="failed to get container status \"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086\": rpc error: code = NotFound desc = could not find container \"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086\": container with ID starting with 80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086 not found: ID does not exist" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964208 4915 scope.go:117] 
"RemoveContainer" containerID="c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964575 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840"} err="failed to get container status \"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840\": rpc error: code = NotFound desc = could not find container \"c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840\": container with ID starting with c2bb0558803889ce1bc71f30b97d6efe240f69782b0d6287640d57a81914b840 not found: ID does not exist" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964601 4915 scope.go:117] "RemoveContainer" containerID="80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964953 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086"} err="failed to get container status \"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086\": rpc error: code = NotFound desc = could not find container \"80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086\": container with ID starting with 80871dddb0a10227151de97124919f8d75b278ee7ad936bcacffa342ef69d086 not found: ID does not exist" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.964980 4915 scope.go:117] "RemoveContainer" containerID="ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d" Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.965080 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.976305 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.977239 
4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977273 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.977303 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65254045-1103-45cd-b7ad-da904497894e" containerName="nova-manage"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977315 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="65254045-1103-45cd-b7ad-da904497894e" containerName="nova-manage"
Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.977334 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977343 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.977387 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-api"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977397 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-api"
Jan 27 20:15:18 crc kubenswrapper[4915]: E0127 20:15:18.977414 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-metadata"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977423 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-metadata"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977653 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-api"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977679 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977695 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="65254045-1103-45cd-b7ad-da904497894e" containerName="nova-manage"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977710 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2015bf50-de5e-432d-8732-a894c1593de6" containerName="nova-metadata-metadata"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.977725 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" containerName="nova-api-log"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.979581 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.985135 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.994181 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 20:15:18 crc kubenswrapper[4915]: I0127 20:15:18.995882 4915 scope.go:117] "RemoveContainer" containerID="669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.000391 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.007451 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.010999 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.019256 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.049754 4915 scope.go:117] "RemoveContainer" containerID="ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"
Jan 27 20:15:19 crc kubenswrapper[4915]: E0127 20:15:19.050530 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d\": container with ID starting with ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d not found: ID does not exist" containerID="ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.050578 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"} err="failed to get container status \"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d\": rpc error: code = NotFound desc = could not find container \"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d\": container with ID starting with ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d not found: ID does not exist"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.050613 4915 scope.go:117] "RemoveContainer" containerID="669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"
Jan 27 20:15:19 crc kubenswrapper[4915]: E0127 20:15:19.051487 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731\": container with ID starting with 669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731 not found: ID does not exist" containerID="669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.051609 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"} err="failed to get container status \"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731\": rpc error: code = NotFound desc = could not find container \"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731\": container with ID starting with 669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731 not found: ID does not exist"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.051711 4915 scope.go:117] "RemoveContainer" containerID="ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.052205 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d"} err="failed to get container status \"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d\": rpc error: code = NotFound desc = could not find container \"ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d\": container with ID starting with ea321256b62e17b41b941f56d020c196aff9b3a7de5692c63b17a9b63aa19c0d not found: ID does not exist"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.052299 4915 scope.go:117] "RemoveContainer" containerID="669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.052636 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731"} err="failed to get container status \"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731\": rpc error: code = NotFound desc = could not find container \"669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731\": container with ID starting with 669a5528fd5bdf8daa7211a4d49a52229fee474016c9c4fee0673864a37b8731 not found: ID does not exist"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.111811 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swljh\" (UniqueName: \"kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.112173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbwc\" (UniqueName: \"kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.112322 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.112511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.112615 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.112721 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.113117 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.113252 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215396 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215474 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215560 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swljh\" (UniqueName: \"kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215586 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbwc\" (UniqueName: \"kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.215625 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.216522 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.216988 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.220475 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.221235 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.222857 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.232757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.238656 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbwc\" (UniqueName: \"kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc\") pod \"nova-metadata-0\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.239092 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swljh\" (UniqueName: \"kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh\") pod \"nova-api-0\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.309069 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.336636 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.378524 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2015bf50-de5e-432d-8732-a894c1593de6" path="/var/lib/kubelet/pods/2015bf50-de5e-432d-8732-a894c1593de6/volumes"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.379612 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cea03b9-f218-4d20-86e6-94add13710e3" path="/var/lib/kubelet/pods/4cea03b9-f218-4d20-86e6-94add13710e3/volumes"
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.764857 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.854599 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.893313 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerStarted","Data":"7502f2b056bc1ff461ccec5c61ee513344288026506d7340066e6bd0b8312376"}
Jan 27 20:15:19 crc kubenswrapper[4915]: I0127 20:15:19.896486 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerStarted","Data":"0d0c6161ff8f04b1602a8a9db6edc7d099822ff54c451f0727a205c6fbeeec42"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.405010 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq"
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.458818 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.469957 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"]
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.470262 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-875c99799-qdfcx" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="dnsmasq-dns" containerID="cri-o://1bb097f916c90d17a58fc5d98469e6091b973555b1b4de4f38a49b6737ece427" gracePeriod=10
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.702164 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.935319 4915 generic.go:334] "Generic (PLEG): container finished" podID="e962e699-4ef5-48c8-9704-724462d56679" containerID="1bb097f916c90d17a58fc5d98469e6091b973555b1b4de4f38a49b6737ece427" exitCode=0
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.935471 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-875c99799-qdfcx" event={"ID":"e962e699-4ef5-48c8-9704-724462d56679","Type":"ContainerDied","Data":"1bb097f916c90d17a58fc5d98469e6091b973555b1b4de4f38a49b6737ece427"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.944083 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerStarted","Data":"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.944125 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerStarted","Data":"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.950402 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerStarted","Data":"ef561dc8e739a2c43dd4fcb1546fa37e6ff5d701ac46a606a8583f76261a149f"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.950444 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerStarted","Data":"afca5d0db5e09e2e2f16c3ba5666a9f6849f0e3b45dd82110151438e59a4e31b"}
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.962070 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:15:20 crc kubenswrapper[4915]: I0127 20:15:20.977617 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.977592411 podStartE2EDuration="2.977592411s" podCreationTimestamp="2026-01-27 20:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:20.967434931 +0000 UTC m=+5612.325288595" watchObservedRunningTime="2026-01-27 20:15:20.977592411 +0000 UTC m=+5612.335446075"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.016116 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.01609851 podStartE2EDuration="3.01609851s" podCreationTimestamp="2026-01-27 20:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:21.014463089 +0000 UTC m=+5612.372316753" watchObservedRunningTime="2026-01-27 20:15:21.01609851 +0000 UTC m=+5612.373952174"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.073375 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-875c99799-qdfcx"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.161039 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc\") pod \"e962e699-4ef5-48c8-9704-724462d56679\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") "
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.161383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb\") pod \"e962e699-4ef5-48c8-9704-724462d56679\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") "
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.161434 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb\") pod \"e962e699-4ef5-48c8-9704-724462d56679\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") "
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.161613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config\") pod \"e962e699-4ef5-48c8-9704-724462d56679\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") "
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.161672 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7kgg\" (UniqueName: \"kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg\") pod \"e962e699-4ef5-48c8-9704-724462d56679\" (UID: \"e962e699-4ef5-48c8-9704-724462d56679\") "
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.166392 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg" (OuterVolumeSpecName: "kube-api-access-l7kgg") pod "e962e699-4ef5-48c8-9704-724462d56679" (UID: "e962e699-4ef5-48c8-9704-724462d56679"). InnerVolumeSpecName "kube-api-access-l7kgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.205982 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config" (OuterVolumeSpecName: "config") pod "e962e699-4ef5-48c8-9704-724462d56679" (UID: "e962e699-4ef5-48c8-9704-724462d56679"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.208097 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e962e699-4ef5-48c8-9704-724462d56679" (UID: "e962e699-4ef5-48c8-9704-724462d56679"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.208917 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e962e699-4ef5-48c8-9704-724462d56679" (UID: "e962e699-4ef5-48c8-9704-724462d56679"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.213682 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e962e699-4ef5-48c8-9704-724462d56679" (UID: "e962e699-4ef5-48c8-9704-724462d56679"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.266112 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.266367 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7kgg\" (UniqueName: \"kubernetes.io/projected/e962e699-4ef5-48c8-9704-724462d56679-kube-api-access-l7kgg\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.266861 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.266961 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.267019 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e962e699-4ef5-48c8-9704-724462d56679-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.956946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-875c99799-qdfcx" event={"ID":"e962e699-4ef5-48c8-9704-724462d56679","Type":"ContainerDied","Data":"41f299716271cfa9c30dc31c3594530e415dd9afa89cf80700d7b2425b20a8fa"}
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.957019 4915 scope.go:117] "RemoveContainer" containerID="1bb097f916c90d17a58fc5d98469e6091b973555b1b4de4f38a49b6737ece427"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.957121 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-875c99799-qdfcx"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.986293 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"]
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.986983 4915 scope.go:117] "RemoveContainer" containerID="e5f67737f306cf0ecdbe580a6dd1a6cbc670157dd4106fb508f66a99e190f9c9"
Jan 27 20:15:21 crc kubenswrapper[4915]: I0127 20:15:21.998862 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-875c99799-qdfcx"]
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.297770 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.595736 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.693285 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data\") pod \"d2e40c09-ab7a-452a-a474-50bce06699c4\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") "
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.693366 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn8g2\" (UniqueName: \"kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2\") pod \"d2e40c09-ab7a-452a-a474-50bce06699c4\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") "
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.693558 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle\") pod \"d2e40c09-ab7a-452a-a474-50bce06699c4\" (UID: \"d2e40c09-ab7a-452a-a474-50bce06699c4\") "
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.699063 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2" (OuterVolumeSpecName: "kube-api-access-qn8g2") pod "d2e40c09-ab7a-452a-a474-50bce06699c4" (UID: "d2e40c09-ab7a-452a-a474-50bce06699c4"). InnerVolumeSpecName "kube-api-access-qn8g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.720200 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2e40c09-ab7a-452a-a474-50bce06699c4" (UID: "d2e40c09-ab7a-452a-a474-50bce06699c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.729464 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data" (OuterVolumeSpecName: "config-data") pod "d2e40c09-ab7a-452a-a474-50bce06699c4" (UID: "d2e40c09-ab7a-452a-a474-50bce06699c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.784611 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-t6qgw"]
Jan 27 20:15:22 crc kubenswrapper[4915]: E0127 20:15:22.785122 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="init"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.785145 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="init"
Jan 27 20:15:22 crc kubenswrapper[4915]: E0127 20:15:22.785167 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e40c09-ab7a-452a-a474-50bce06699c4" containerName="nova-scheduler-scheduler"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.785179 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e40c09-ab7a-452a-a474-50bce06699c4" containerName="nova-scheduler-scheduler"
Jan 27 20:15:22 crc kubenswrapper[4915]: E0127 20:15:22.785190 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="dnsmasq-dns"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.785199 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="dnsmasq-dns"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.795635 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.795671 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn8g2\" (UniqueName: \"kubernetes.io/projected/d2e40c09-ab7a-452a-a474-50bce06699c4-kube-api-access-qn8g2\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.795683 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e40c09-ab7a-452a-a474-50bce06699c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.797396 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e40c09-ab7a-452a-a474-50bce06699c4" containerName="nova-scheduler-scheduler"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.797453 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e962e699-4ef5-48c8-9704-724462d56679" containerName="dnsmasq-dns"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.798187 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.799976 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-t6qgw"]
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.801009 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.801235 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.899905 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.900236 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjn6\" (UniqueName: \"kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.900455 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.900621 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.971685 4915 generic.go:334] "Generic (PLEG): container finished" podID="d2e40c09-ab7a-452a-a474-50bce06699c4" containerID="a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2" exitCode=0
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.971745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2e40c09-ab7a-452a-a474-50bce06699c4","Type":"ContainerDied","Data":"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2"}
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.971783 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d2e40c09-ab7a-452a-a474-50bce06699c4","Type":"ContainerDied","Data":"44b845b8d64b6e8971b17a94c6162de0b3be84a51b33774fce87f399c44dd7e8"}
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.971814 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 20:15:22 crc kubenswrapper[4915]: I0127 20:15:22.971826 4915 scope.go:117] "RemoveContainer" containerID="a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2"
Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.002952 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.003077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjn6\" (UniqueName: \"kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.003169 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.003230 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw"
Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.008897 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.010392 4915 scope.go:117] "RemoveContainer" containerID="a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.013032 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.014455 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:23 crc kubenswrapper[4915]: E0127 20:15:23.015555 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2\": container with ID starting with a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2 not found: ID does not exist" containerID="a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.015587 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2"} err="failed to get container status \"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2\": rpc error: code = NotFound desc = could not find container \"a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2\": container with ID starting with a27a05c2aa673e9e84033a689edaa95471404b65b24160b6b58265fdb2ddcbf2 not found: ID does not exist" Jan 27 20:15:23 crc 
kubenswrapper[4915]: I0127 20:15:23.015991 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.035355 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjn6\" (UniqueName: \"kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6\") pod \"nova-cell1-cell-mapping-t6qgw\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.036369 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.047978 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.062347 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.067554 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.091747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.137928 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.207693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.207815 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdpq\" (UniqueName: \"kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.207844 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.309879 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwdpq\" (UniqueName: \"kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.310288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 
crc kubenswrapper[4915]: I0127 20:15:23.310446 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.316489 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.316489 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.329648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwdpq\" (UniqueName: \"kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq\") pod \"nova-scheduler-0\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.370859 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e40c09-ab7a-452a-a474-50bce06699c4" path="/var/lib/kubelet/pods/d2e40c09-ab7a-452a-a474-50bce06699c4/volumes" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.371605 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e962e699-4ef5-48c8-9704-724462d56679" path="/var/lib/kubelet/pods/e962e699-4ef5-48c8-9704-724462d56679/volumes" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.387956 4915 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.588036 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-t6qgw"] Jan 27 20:15:23 crc kubenswrapper[4915]: W0127 20:15:23.590877 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d602920_84b6_42be_936d_95dfc8c05b96.slice/crio-7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8 WatchSource:0}: Error finding container 7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8: Status 404 returned error can't find the container with id 7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8 Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.810628 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:23 crc kubenswrapper[4915]: W0127 20:15:23.813530 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod126cc9af_dbe0_4a69_a297_83fdaf8a029a.slice/crio-8563a2db1fdbd69e69db675b602e0157c7360472fe30ba150702146feb2f5f52 WatchSource:0}: Error finding container 8563a2db1fdbd69e69db675b602e0157c7360472fe30ba150702146feb2f5f52: Status 404 returned error can't find the container with id 8563a2db1fdbd69e69db675b602e0157c7360472fe30ba150702146feb2f5f52 Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.981107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-t6qgw" event={"ID":"6d602920-84b6-42be-936d-95dfc8c05b96","Type":"ContainerStarted","Data":"cbc41f903edc3f3d4ea202d58435dbc6878ae0e7e57c864f6164cd6e3e754f6c"} Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.981150 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-t6qgw" 
event={"ID":"6d602920-84b6-42be-936d-95dfc8c05b96","Type":"ContainerStarted","Data":"7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8"} Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.982750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"126cc9af-dbe0-4a69-a297-83fdaf8a029a","Type":"ContainerStarted","Data":"20390b0f12e2d4bc7fc656e36dc7ceafea4f9c9083137c4cc87b01fd8f132661"} Jan 27 20:15:23 crc kubenswrapper[4915]: I0127 20:15:23.982779 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"126cc9af-dbe0-4a69-a297-83fdaf8a029a","Type":"ContainerStarted","Data":"8563a2db1fdbd69e69db675b602e0157c7360472fe30ba150702146feb2f5f52"} Jan 27 20:15:24 crc kubenswrapper[4915]: I0127 20:15:24.008712 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-t6qgw" podStartSLOduration=2.008686464 podStartE2EDuration="2.008686464s" podCreationTimestamp="2026-01-27 20:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:23.995176271 +0000 UTC m=+5615.353029935" watchObservedRunningTime="2026-01-27 20:15:24.008686464 +0000 UTC m=+5615.366540128" Jan 27 20:15:24 crc kubenswrapper[4915]: I0127 20:15:24.013569 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.013552704 podStartE2EDuration="1.013552704s" podCreationTimestamp="2026-01-27 20:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:24.012771394 +0000 UTC m=+5615.370625058" watchObservedRunningTime="2026-01-27 20:15:24.013552704 +0000 UTC m=+5615.371406368" Jan 27 20:15:24 crc kubenswrapper[4915]: I0127 20:15:24.337131 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:15:24 crc kubenswrapper[4915]: I0127 20:15:24.337384 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:15:28 crc kubenswrapper[4915]: I0127 20:15:28.031277 4915 generic.go:334] "Generic (PLEG): container finished" podID="6d602920-84b6-42be-936d-95dfc8c05b96" containerID="cbc41f903edc3f3d4ea202d58435dbc6878ae0e7e57c864f6164cd6e3e754f6c" exitCode=0 Jan 27 20:15:28 crc kubenswrapper[4915]: I0127 20:15:28.031551 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-t6qgw" event={"ID":"6d602920-84b6-42be-936d-95dfc8c05b96","Type":"ContainerDied","Data":"cbc41f903edc3f3d4ea202d58435dbc6878ae0e7e57c864f6164cd6e3e754f6c"} Jan 27 20:15:28 crc kubenswrapper[4915]: I0127 20:15:28.388702 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.309270 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.309681 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.337646 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.337705 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.397447 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.534385 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjn6\" (UniqueName: \"kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6\") pod \"6d602920-84b6-42be-936d-95dfc8c05b96\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.534454 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle\") pod \"6d602920-84b6-42be-936d-95dfc8c05b96\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.534491 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data\") pod \"6d602920-84b6-42be-936d-95dfc8c05b96\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.534627 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts\") pod \"6d602920-84b6-42be-936d-95dfc8c05b96\" (UID: \"6d602920-84b6-42be-936d-95dfc8c05b96\") " Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.540398 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6" (OuterVolumeSpecName: "kube-api-access-gwjn6") pod "6d602920-84b6-42be-936d-95dfc8c05b96" (UID: "6d602920-84b6-42be-936d-95dfc8c05b96"). InnerVolumeSpecName "kube-api-access-gwjn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.540662 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts" (OuterVolumeSpecName: "scripts") pod "6d602920-84b6-42be-936d-95dfc8c05b96" (UID: "6d602920-84b6-42be-936d-95dfc8c05b96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.565326 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data" (OuterVolumeSpecName: "config-data") pod "6d602920-84b6-42be-936d-95dfc8c05b96" (UID: "6d602920-84b6-42be-936d-95dfc8c05b96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.573988 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d602920-84b6-42be-936d-95dfc8c05b96" (UID: "6d602920-84b6-42be-936d-95dfc8c05b96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.636766 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.636943 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwjn6\" (UniqueName: \"kubernetes.io/projected/6d602920-84b6-42be-936d-95dfc8c05b96-kube-api-access-gwjn6\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.637002 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:29 crc kubenswrapper[4915]: I0127 20:15:29.637055 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d602920-84b6-42be-936d-95dfc8c05b96-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.055257 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-t6qgw" event={"ID":"6d602920-84b6-42be-936d-95dfc8c05b96","Type":"ContainerDied","Data":"7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8"} Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.055317 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7817f61c49b76cebcf3c4cd69f0deefe48661c392d0feb57d5e9d8d4ddeabca8" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.055389 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-t6qgw" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.255997 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.256254 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-log" containerID="cri-o://3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29" gracePeriod=30 Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.256746 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-api" containerID="cri-o://9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0" gracePeriod=30 Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.264339 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": EOF" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.264634 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": EOF" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.273896 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.274398 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" containerName="nova-scheduler-scheduler" containerID="cri-o://20390b0f12e2d4bc7fc656e36dc7ceafea4f9c9083137c4cc87b01fd8f132661" gracePeriod=30 Jan 27 
20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.298169 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.298383 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-log" containerID="cri-o://afca5d0db5e09e2e2f16c3ba5666a9f6849f0e3b45dd82110151438e59a4e31b" gracePeriod=30 Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.298758 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-metadata" containerID="cri-o://ef561dc8e739a2c43dd4fcb1546fa37e6ff5d701ac46a606a8583f76261a149f" gracePeriod=30 Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.308068 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": EOF" Jan 27 20:15:30 crc kubenswrapper[4915]: I0127 20:15:30.308089 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": EOF" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.071289 4915 generic.go:334] "Generic (PLEG): container finished" podID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" containerID="20390b0f12e2d4bc7fc656e36dc7ceafea4f9c9083137c4cc87b01fd8f132661" exitCode=0 Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.071738 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"126cc9af-dbe0-4a69-a297-83fdaf8a029a","Type":"ContainerDied","Data":"20390b0f12e2d4bc7fc656e36dc7ceafea4f9c9083137c4cc87b01fd8f132661"} Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.075243 4915 generic.go:334] "Generic (PLEG): container finished" podID="948d654b-fa22-4247-98f2-887f6828d9fd" containerID="3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29" exitCode=143 Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.075314 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerDied","Data":"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29"} Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.078014 4915 generic.go:334] "Generic (PLEG): container finished" podID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerID="afca5d0db5e09e2e2f16c3ba5666a9f6849f0e3b45dd82110151438e59a4e31b" exitCode=143 Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.078047 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerDied","Data":"afca5d0db5e09e2e2f16c3ba5666a9f6849f0e3b45dd82110151438e59a4e31b"} Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.339723 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.465282 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdpq\" (UniqueName: \"kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq\") pod \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.465469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle\") pod \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.465508 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data\") pod \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\" (UID: \"126cc9af-dbe0-4a69-a297-83fdaf8a029a\") " Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.469451 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq" (OuterVolumeSpecName: "kube-api-access-qwdpq") pod "126cc9af-dbe0-4a69-a297-83fdaf8a029a" (UID: "126cc9af-dbe0-4a69-a297-83fdaf8a029a"). InnerVolumeSpecName "kube-api-access-qwdpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.494330 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data" (OuterVolumeSpecName: "config-data") pod "126cc9af-dbe0-4a69-a297-83fdaf8a029a" (UID: "126cc9af-dbe0-4a69-a297-83fdaf8a029a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.497811 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "126cc9af-dbe0-4a69-a297-83fdaf8a029a" (UID: "126cc9af-dbe0-4a69-a297-83fdaf8a029a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.567375 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwdpq\" (UniqueName: \"kubernetes.io/projected/126cc9af-dbe0-4a69-a297-83fdaf8a029a-kube-api-access-qwdpq\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.567406 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4915]: I0127 20:15:31.567416 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/126cc9af-dbe0-4a69-a297-83fdaf8a029a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.087638 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"126cc9af-dbe0-4a69-a297-83fdaf8a029a","Type":"ContainerDied","Data":"8563a2db1fdbd69e69db675b602e0157c7360472fe30ba150702146feb2f5f52"} Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.087913 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.088073 4915 scope.go:117] "RemoveContainer" containerID="20390b0f12e2d4bc7fc656e36dc7ceafea4f9c9083137c4cc87b01fd8f132661" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.125061 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.133529 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.146058 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:32 crc kubenswrapper[4915]: E0127 20:15:32.146948 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d602920-84b6-42be-936d-95dfc8c05b96" containerName="nova-manage" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.146975 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d602920-84b6-42be-936d-95dfc8c05b96" containerName="nova-manage" Jan 27 20:15:32 crc kubenswrapper[4915]: E0127 20:15:32.147007 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" containerName="nova-scheduler-scheduler" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.147015 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" containerName="nova-scheduler-scheduler" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.147238 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d602920-84b6-42be-936d-95dfc8c05b96" containerName="nova-manage" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.147265 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" containerName="nova-scheduler-scheduler" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.147885 4915 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.150061 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.162137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.279223 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.279284 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp7w6\" (UniqueName: \"kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.279423 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.380764 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.380854 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp7w6\" (UniqueName: \"kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.381000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.386509 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.386546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.404477 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp7w6\" (UniqueName: \"kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6\") pod \"nova-scheduler-0\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.475822 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:15:32 crc kubenswrapper[4915]: I0127 20:15:32.904455 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:15:33 crc kubenswrapper[4915]: I0127 20:15:33.101811 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0e756d45-4816-4bfd-8d5b-05dd204c3fe5","Type":"ContainerStarted","Data":"21ae58e15b6dddfc0026ab1c2a1becef2c5b3e4b4566c7e4cb48314f5aa6676e"} Jan 27 20:15:33 crc kubenswrapper[4915]: I0127 20:15:33.368597 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="126cc9af-dbe0-4a69-a297-83fdaf8a029a" path="/var/lib/kubelet/pods/126cc9af-dbe0-4a69-a297-83fdaf8a029a/volumes" Jan 27 20:15:34 crc kubenswrapper[4915]: I0127 20:15:34.113683 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0e756d45-4816-4bfd-8d5b-05dd204c3fe5","Type":"ContainerStarted","Data":"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc"} Jan 27 20:15:34 crc kubenswrapper[4915]: I0127 20:15:34.148692 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.148671173 podStartE2EDuration="2.148671173s" podCreationTimestamp="2026-01-27 20:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:34.136195785 +0000 UTC m=+5625.494049539" watchObservedRunningTime="2026-01-27 20:15:34.148671173 +0000 UTC m=+5625.506524847" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.123005 4915 generic.go:334] "Generic (PLEG): container finished" podID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerID="ef561dc8e739a2c43dd4fcb1546fa37e6ff5d701ac46a606a8583f76261a149f" exitCode=0 Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.123090 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerDied","Data":"ef561dc8e739a2c43dd4fcb1546fa37e6ff5d701ac46a606a8583f76261a149f"} Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.123611 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24566547-8a4e-4fd8-9597-1a16cdbec9e9","Type":"ContainerDied","Data":"0d0c6161ff8f04b1602a8a9db6edc7d099822ff54c451f0727a205c6fbeeec42"} Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.123633 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d0c6161ff8f04b1602a8a9db6edc7d099822ff54c451f0727a205c6fbeeec42" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.134499 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.233064 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data\") pod \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.233155 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkbwc\" (UniqueName: \"kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc\") pod \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.233234 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle\") pod \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 
20:15:35.233257 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs\") pod \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\" (UID: \"24566547-8a4e-4fd8-9597-1a16cdbec9e9\") " Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.234366 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs" (OuterVolumeSpecName: "logs") pod "24566547-8a4e-4fd8-9597-1a16cdbec9e9" (UID: "24566547-8a4e-4fd8-9597-1a16cdbec9e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.239969 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc" (OuterVolumeSpecName: "kube-api-access-jkbwc") pod "24566547-8a4e-4fd8-9597-1a16cdbec9e9" (UID: "24566547-8a4e-4fd8-9597-1a16cdbec9e9"). InnerVolumeSpecName "kube-api-access-jkbwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.257162 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data" (OuterVolumeSpecName: "config-data") pod "24566547-8a4e-4fd8-9597-1a16cdbec9e9" (UID: "24566547-8a4e-4fd8-9597-1a16cdbec9e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.258037 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24566547-8a4e-4fd8-9597-1a16cdbec9e9" (UID: "24566547-8a4e-4fd8-9597-1a16cdbec9e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.335009 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.335038 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24566547-8a4e-4fd8-9597-1a16cdbec9e9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.335049 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24566547-8a4e-4fd8-9597-1a16cdbec9e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:35 crc kubenswrapper[4915]: I0127 20:15:35.335058 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkbwc\" (UniqueName: \"kubernetes.io/projected/24566547-8a4e-4fd8-9597-1a16cdbec9e9-kube-api-access-jkbwc\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.050656 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.147725 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle\") pod \"948d654b-fa22-4247-98f2-887f6828d9fd\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.147858 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs\") pod \"948d654b-fa22-4247-98f2-887f6828d9fd\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.148182 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data\") pod \"948d654b-fa22-4247-98f2-887f6828d9fd\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.148219 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swljh\" (UniqueName: \"kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh\") pod \"948d654b-fa22-4247-98f2-887f6828d9fd\" (UID: \"948d654b-fa22-4247-98f2-887f6828d9fd\") " Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.149874 4915 generic.go:334] "Generic (PLEG): container finished" podID="948d654b-fa22-4247-98f2-887f6828d9fd" containerID="9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0" exitCode=0 Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.149984 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerDied","Data":"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0"} Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.150092 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"948d654b-fa22-4247-98f2-887f6828d9fd","Type":"ContainerDied","Data":"7502f2b056bc1ff461ccec5c61ee513344288026506d7340066e6bd0b8312376"} Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.150015 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.150136 4915 scope.go:117] "RemoveContainer" containerID="9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.150484 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs" (OuterVolumeSpecName: "logs") pod "948d654b-fa22-4247-98f2-887f6828d9fd" (UID: "948d654b-fa22-4247-98f2-887f6828d9fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.150102 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.155410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh" (OuterVolumeSpecName: "kube-api-access-swljh") pod "948d654b-fa22-4247-98f2-887f6828d9fd" (UID: "948d654b-fa22-4247-98f2-887f6828d9fd"). InnerVolumeSpecName "kube-api-access-swljh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.175472 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data" (OuterVolumeSpecName: "config-data") pod "948d654b-fa22-4247-98f2-887f6828d9fd" (UID: "948d654b-fa22-4247-98f2-887f6828d9fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.176129 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948d654b-fa22-4247-98f2-887f6828d9fd" (UID: "948d654b-fa22-4247-98f2-887f6828d9fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.254778 4915 scope.go:117] "RemoveContainer" containerID="3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.257468 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.257521 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swljh\" (UniqueName: \"kubernetes.io/projected/948d654b-fa22-4247-98f2-887f6828d9fd-kube-api-access-swljh\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.257537 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948d654b-fa22-4247-98f2-887f6828d9fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.257552 4915 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948d654b-fa22-4247-98f2-887f6828d9fd-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.261856 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.277215 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.291696 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.292144 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-api" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292161 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-api" Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.292177 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-metadata" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292183 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-metadata" Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.292200 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-log" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292207 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-log" Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.292250 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" 
containerName="nova-metadata-log" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292258 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-log" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292432 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-log" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292447 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-log" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292462 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" containerName="nova-api-api" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.292477 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" containerName="nova-metadata-metadata" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.293526 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.298208 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.299584 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.326667 4915 scope.go:117] "RemoveContainer" containerID="9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0" Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.328362 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0\": container with ID starting with 9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0 not found: ID does not exist" containerID="9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.328425 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0"} err="failed to get container status \"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0\": rpc error: code = NotFound desc = could not find container \"9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0\": container with ID starting with 9ae9ccd3afe3f2a89205fe1dbfd4def86a611b53f883e8bb6a479d5e69c82df0 not found: ID does not exist" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.328468 4915 scope.go:117] "RemoveContainer" containerID="3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29" Jan 27 20:15:36 crc kubenswrapper[4915]: E0127 20:15:36.330156 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29\": container with ID starting with 3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29 not found: ID does not exist" containerID="3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.330227 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29"} err="failed to get container status \"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29\": rpc error: code = NotFound desc = could not find container \"3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29\": container with ID starting with 3fd0f8d5f488e24eb7e32222a6049d5df4b5fd4ca64b5ce72f548b2ede7f7d29 not found: ID does not exist" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.360543 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.360670 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczhs\" (UniqueName: \"kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.360833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " 
pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.360858 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.462044 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.462095 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.462198 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.462236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczhs\" (UniqueName: \"kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.462833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.467140 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.467924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.486108 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczhs\" (UniqueName: \"kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs\") pod \"nova-metadata-0\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.503804 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.517845 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.535559 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.537974 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.542707 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.550475 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.628976 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.665833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.665898 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.665977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2gv\" (UniqueName: \"kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0" Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.666040 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " 
pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.768907 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2gv\" (UniqueName: \"kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.769638 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.769822 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.769880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.773984 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.784992 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.785127 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.790731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2gv\" (UniqueName: \"kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv\") pod \"nova-api-0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.857417 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 20:15:36 crc kubenswrapper[4915]: I0127 20:15:36.913617 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:15:36 crc kubenswrapper[4915]: W0127 20:15:36.925978 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a533c6e_e261_4faa_8186_4a3105f1a9e4.slice/crio-3f48ac5cc8619c72403c17db7fa4c69424fe887b1355ea85eff7f89051efde20 WatchSource:0}: Error finding container 3f48ac5cc8619c72403c17db7fa4c69424fe887b1355ea85eff7f89051efde20: Status 404 returned error can't find the container with id 3f48ac5cc8619c72403c17db7fa4c69424fe887b1355ea85eff7f89051efde20
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.161426 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerStarted","Data":"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb"}
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.161818 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerStarted","Data":"3f48ac5cc8619c72403c17db7fa4c69424fe887b1355ea85eff7f89051efde20"}
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.304776 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 20:15:37 crc kubenswrapper[4915]: W0127 20:15:37.307909 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae0714b_063b_418d_a1b1_d87a19153cf0.slice/crio-96575e480041ac7c6399932b6e510c6bab452baa089766dbf52cb31c04406d76 WatchSource:0}: Error finding container 96575e480041ac7c6399932b6e510c6bab452baa089766dbf52cb31c04406d76: Status 404 returned error can't find the container with id 96575e480041ac7c6399932b6e510c6bab452baa089766dbf52cb31c04406d76
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.371872 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24566547-8a4e-4fd8-9597-1a16cdbec9e9" path="/var/lib/kubelet/pods/24566547-8a4e-4fd8-9597-1a16cdbec9e9/volumes"
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.372690 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948d654b-fa22-4247-98f2-887f6828d9fd" path="/var/lib/kubelet/pods/948d654b-fa22-4247-98f2-887f6828d9fd/volumes"
Jan 27 20:15:37 crc kubenswrapper[4915]: I0127 20:15:37.476368 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.177429 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fae0714b-063b-418d-a1b1-d87a19153cf0","Type":"ContainerStarted","Data":"e6d52b71864b8afd15c1720688b4afee5eadf603626910959906a22d45943bb6"}
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.177828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fae0714b-063b-418d-a1b1-d87a19153cf0","Type":"ContainerStarted","Data":"e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438"}
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.177849 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fae0714b-063b-418d-a1b1-d87a19153cf0","Type":"ContainerStarted","Data":"96575e480041ac7c6399932b6e510c6bab452baa089766dbf52cb31c04406d76"}
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.181255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerStarted","Data":"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b"}
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.202370 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.202343492 podStartE2EDuration="2.202343492s" podCreationTimestamp="2026-01-27 20:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:38.19900487 +0000 UTC m=+5629.556858554" watchObservedRunningTime="2026-01-27 20:15:38.202343492 +0000 UTC m=+5629.560197166"
Jan 27 20:15:38 crc kubenswrapper[4915]: I0127 20:15:38.219547 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.219521995 podStartE2EDuration="2.219521995s" podCreationTimestamp="2026-01-27 20:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:38.21605187 +0000 UTC m=+5629.573905554" watchObservedRunningTime="2026-01-27 20:15:38.219521995 +0000 UTC m=+5629.577375659"
Jan 27 20:15:41 crc kubenswrapper[4915]: I0127 20:15:41.629987 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 20:15:41 crc kubenswrapper[4915]: I0127 20:15:41.630362 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 20:15:42 crc kubenswrapper[4915]: I0127 20:15:42.476363 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 20:15:42 crc kubenswrapper[4915]: I0127 20:15:42.520332 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 20:15:43 crc kubenswrapper[4915]: I0127 20:15:43.250711 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 20:15:46 crc kubenswrapper[4915]: I0127 20:15:46.629949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 20:15:46 crc kubenswrapper[4915]: I0127 20:15:46.630420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 20:15:46 crc kubenswrapper[4915]: I0127 20:15:46.858428 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 20:15:46 crc kubenswrapper[4915]: I0127 20:15:46.858496 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 20:15:47 crc kubenswrapper[4915]: I0127 20:15:47.712030 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.74:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 20:15:47 crc kubenswrapper[4915]: I0127 20:15:47.712785 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.74:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 20:15:47 crc kubenswrapper[4915]: I0127 20:15:47.941047 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.75:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 20:15:47 crc kubenswrapper[4915]: I0127 20:15:47.941059 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.75:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.466839 4915 scope.go:117] "RemoveContainer" containerID="952a08ba4644fbea4f825348f02284d7be7d574ce293e32c660aa02161e71393"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.492440 4915 scope.go:117] "RemoveContainer" containerID="a626ec9f6560ebaf3606b6da727c0258c28c2cd727b0871464cbb89e98658d58"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.632265 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.632860 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.635175 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.635467 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.862102 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.862713 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.862844 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 27 20:15:56 crc kubenswrapper[4915]: I0127 20:15:56.865556 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.371122 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.375902 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.572938 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"]
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.575457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.597995 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"]
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.669017 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.669065 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.669096 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.669142 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9j96\" (UniqueName: \"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.669176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.770878 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.770944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.771000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.771994 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.771998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.772039 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.772062 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9j96\" (UniqueName: \"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.772465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.773179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.798069 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9j96\" (UniqueName: \"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96\") pod \"dnsmasq-dns-fc458fcf7-66xrv\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:57 crc kubenswrapper[4915]: I0127 20:15:57.896241 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:15:58 crc kubenswrapper[4915]: I0127 20:15:58.370040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"]
Jan 27 20:15:59 crc kubenswrapper[4915]: I0127 20:15:59.443616 4915 generic.go:334] "Generic (PLEG): container finished" podID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerID="b7bb7ea766c84ce5e6db10c0970c8297125d8dc51c0ecac9e2133c5f85e013f7" exitCode=0
Jan 27 20:15:59 crc kubenswrapper[4915]: I0127 20:15:59.443762 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" event={"ID":"a001c350-6514-48b6-a7ed-78f0ad3c55bd","Type":"ContainerDied","Data":"b7bb7ea766c84ce5e6db10c0970c8297125d8dc51c0ecac9e2133c5f85e013f7"}
Jan 27 20:15:59 crc kubenswrapper[4915]: I0127 20:15:59.444209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" event={"ID":"a001c350-6514-48b6-a7ed-78f0ad3c55bd","Type":"ContainerStarted","Data":"fcaedb509095c74c2145fd1954c46f332544c3422cdc8c952a8897e132a9e7f2"}
Jan 27 20:16:00 crc kubenswrapper[4915]: I0127 20:16:00.459555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" event={"ID":"a001c350-6514-48b6-a7ed-78f0ad3c55bd","Type":"ContainerStarted","Data":"4baa5bcd39c1397fe94033c9db38e4ab3b07c21543cc4223c0790cbadd2fbfcb"}
Jan 27 20:16:00 crc kubenswrapper[4915]: I0127 20:16:00.460936 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:16:00 crc kubenswrapper[4915]: I0127 20:16:00.495730 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" podStartSLOduration=3.495710018 podStartE2EDuration="3.495710018s" podCreationTimestamp="2026-01-27 20:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:00.48929793 +0000 UTC m=+5651.847151624" watchObservedRunningTime="2026-01-27 20:16:00.495710018 +0000 UTC m=+5651.853563692"
Jan 27 20:16:07 crc kubenswrapper[4915]: I0127 20:16:07.897960 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv"
Jan 27 20:16:07 crc kubenswrapper[4915]: I0127 20:16:07.966299 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"]
Jan 27 20:16:07 crc kubenswrapper[4915]: I0127 20:16:07.966927 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="dnsmasq-dns" containerID="cri-o://09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2" gracePeriod=10
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.445900 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.530743 4915 generic.go:334] "Generic (PLEG): container finished" podID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerID="09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2" exitCode=0
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.530815 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" event={"ID":"937a84dd-3c48-47b5-831f-40bea0da2e8a","Type":"ContainerDied","Data":"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"}
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.530828 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.530845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db6b59b7f-dv4xq" event={"ID":"937a84dd-3c48-47b5-831f-40bea0da2e8a","Type":"ContainerDied","Data":"471434a8cf8e98852ad4f6c2e0edbad1fb7577f2707f1169651f1d300dac84e6"}
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.530865 4915 scope.go:117] "RemoveContainer" containerID="09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.551896 4915 scope.go:117] "RemoveContainer" containerID="89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.573850 4915 scope.go:117] "RemoveContainer" containerID="09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"
Jan 27 20:16:08 crc kubenswrapper[4915]: E0127 20:16:08.574348 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2\": container with ID starting with 09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2 not found: ID does not exist" containerID="09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.574393 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2"} err="failed to get container status \"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2\": rpc error: code = NotFound desc = could not find container \"09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2\": container with ID starting with 09116952ca1269e1dae98d1adbd854852143cef8321d0761fe062d79febe54c2 not found: ID does not exist"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.574431 4915 scope.go:117] "RemoveContainer" containerID="89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029"
Jan 27 20:16:08 crc kubenswrapper[4915]: E0127 20:16:08.574718 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029\": container with ID starting with 89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029 not found: ID does not exist" containerID="89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.574742 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029"} err="failed to get container status \"89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029\": rpc error: code = NotFound desc = could not find container \"89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029\": container with ID starting with 89254b6ab516ff626309af4e9c4db43c5f4fe79a66934e4f87ce696e41144029 not found: ID does not exist"
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.593806 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc\") pod \"937a84dd-3c48-47b5-831f-40bea0da2e8a\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") "
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.593955 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt2t9\" (UniqueName: \"kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9\") pod \"937a84dd-3c48-47b5-831f-40bea0da2e8a\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") "
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.594034 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb\") pod \"937a84dd-3c48-47b5-831f-40bea0da2e8a\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") "
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.594073 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config\") pod \"937a84dd-3c48-47b5-831f-40bea0da2e8a\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") "
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.594131 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb\") pod \"937a84dd-3c48-47b5-831f-40bea0da2e8a\" (UID: \"937a84dd-3c48-47b5-831f-40bea0da2e8a\") "
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.598814 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9" (OuterVolumeSpecName: "kube-api-access-xt2t9") pod "937a84dd-3c48-47b5-831f-40bea0da2e8a" (UID: "937a84dd-3c48-47b5-831f-40bea0da2e8a"). InnerVolumeSpecName "kube-api-access-xt2t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.634699 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "937a84dd-3c48-47b5-831f-40bea0da2e8a" (UID: "937a84dd-3c48-47b5-831f-40bea0da2e8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.635232 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "937a84dd-3c48-47b5-831f-40bea0da2e8a" (UID: "937a84dd-3c48-47b5-831f-40bea0da2e8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.642707 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config" (OuterVolumeSpecName: "config") pod "937a84dd-3c48-47b5-831f-40bea0da2e8a" (UID: "937a84dd-3c48-47b5-831f-40bea0da2e8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.665509 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "937a84dd-3c48-47b5-831f-40bea0da2e8a" (UID: "937a84dd-3c48-47b5-831f-40bea0da2e8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.696810 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.696852 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.696869 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt2t9\" (UniqueName: \"kubernetes.io/projected/937a84dd-3c48-47b5-831f-40bea0da2e8a-kube-api-access-xt2t9\") on node \"crc\" DevicePath \"\""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.696881 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.696899 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/937a84dd-3c48-47b5-831f-40bea0da2e8a-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.864363 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"]
Jan 27 20:16:08 crc kubenswrapper[4915]: I0127 20:16:08.873089 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db6b59b7f-dv4xq"]
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.369526 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" path="/var/lib/kubelet/pods/937a84dd-3c48-47b5-831f-40bea0da2e8a/volumes"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.686761 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wm86m"]
Jan 27 20:16:09 crc kubenswrapper[4915]: E0127 20:16:09.687252 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="init"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.687275 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="init"
Jan 27 20:16:09 crc kubenswrapper[4915]: E0127 20:16:09.687303 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="dnsmasq-dns"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.687310 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="dnsmasq-dns"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.687502 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="937a84dd-3c48-47b5-831f-40bea0da2e8a" containerName="dnsmasq-dns"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.688165 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.699942 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wm86m"]
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.712655 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2kv\" (UniqueName: \"kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.712736 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.789161 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-45f5-account-create-update-4mgch"]
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.790764 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.792480 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.807518 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-45f5-account-create-update-4mgch"]
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.815948 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw2kv\" (UniqueName: \"kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.816036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.816819 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.835816 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw2kv\" (UniqueName: \"kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv\") pod \"cinder-db-create-wm86m\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.917944 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvd4s\" (UniqueName: \"kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:09 crc kubenswrapper[4915]: I0127 20:16:09.918006 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.006471 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wm86m"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.019327 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvd4s\" (UniqueName: \"kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.019398 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.020255 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.037247 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvd4s\" (UniqueName: \"kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s\") pod \"cinder-45f5-account-create-update-4mgch\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.107766 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f5-account-create-update-4mgch"
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.473972 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wm86m"]
Jan 27 20:16:10 crc kubenswrapper[4915]: W0127 20:16:10.477209 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf11f2e1_23ce_4bfa_af41_f4ad85e784d2.slice/crio-8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5 WatchSource:0}: Error finding container 8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5: Status 404 returned error can't find the container with id 8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.552179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wm86m" event={"ID":"af11f2e1-23ce-4bfa-af41-f4ad85e784d2","Type":"ContainerStarted","Data":"8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5"}
Jan 27 20:16:10 crc kubenswrapper[4915]: I0127 20:16:10.573553 4915 kubelet.go:2428] "SyncLoop UPDATE"
source="api" pods=["openstack/cinder-45f5-account-create-update-4mgch"] Jan 27 20:16:10 crc kubenswrapper[4915]: W0127 20:16:10.575456 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29a9cc90_238c_46d4_9adb_84aa486ac4b3.slice/crio-c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c WatchSource:0}: Error finding container c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c: Status 404 returned error can't find the container with id c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c Jan 27 20:16:11 crc kubenswrapper[4915]: I0127 20:16:11.562983 4915 generic.go:334] "Generic (PLEG): container finished" podID="29a9cc90-238c-46d4-9adb-84aa486ac4b3" containerID="dde7e48565783ec89cd137f8224a76f3fb17f39dceccd92e96cea00675497aa0" exitCode=0 Jan 27 20:16:11 crc kubenswrapper[4915]: I0127 20:16:11.563051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f5-account-create-update-4mgch" event={"ID":"29a9cc90-238c-46d4-9adb-84aa486ac4b3","Type":"ContainerDied","Data":"dde7e48565783ec89cd137f8224a76f3fb17f39dceccd92e96cea00675497aa0"} Jan 27 20:16:11 crc kubenswrapper[4915]: I0127 20:16:11.563477 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f5-account-create-update-4mgch" event={"ID":"29a9cc90-238c-46d4-9adb-84aa486ac4b3","Type":"ContainerStarted","Data":"c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c"} Jan 27 20:16:11 crc kubenswrapper[4915]: I0127 20:16:11.565467 4915 generic.go:334] "Generic (PLEG): container finished" podID="af11f2e1-23ce-4bfa-af41-f4ad85e784d2" containerID="e46a1f50336a102dc80d5d23eb54518b77210129002f00b4243ba23ae57edd48" exitCode=0 Jan 27 20:16:11 crc kubenswrapper[4915]: I0127 20:16:11.565504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wm86m" 
event={"ID":"af11f2e1-23ce-4bfa-af41-f4ad85e784d2","Type":"ContainerDied","Data":"e46a1f50336a102dc80d5d23eb54518b77210129002f00b4243ba23ae57edd48"} Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.079443 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wm86m" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.085912 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f5-account-create-update-4mgch" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.275107 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts\") pod \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.275167 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts\") pod \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.275193 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw2kv\" (UniqueName: \"kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv\") pod \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\" (UID: \"af11f2e1-23ce-4bfa-af41-f4ad85e784d2\") " Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.275227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvd4s\" (UniqueName: \"kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s\") pod \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\" (UID: \"29a9cc90-238c-46d4-9adb-84aa486ac4b3\") " Jan 27 
20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.276063 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29a9cc90-238c-46d4-9adb-84aa486ac4b3" (UID: "29a9cc90-238c-46d4-9adb-84aa486ac4b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.276084 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af11f2e1-23ce-4bfa-af41-f4ad85e784d2" (UID: "af11f2e1-23ce-4bfa-af41-f4ad85e784d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.280192 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s" (OuterVolumeSpecName: "kube-api-access-rvd4s") pod "29a9cc90-238c-46d4-9adb-84aa486ac4b3" (UID: "29a9cc90-238c-46d4-9adb-84aa486ac4b3"). InnerVolumeSpecName "kube-api-access-rvd4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.280440 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv" (OuterVolumeSpecName: "kube-api-access-hw2kv") pod "af11f2e1-23ce-4bfa-af41-f4ad85e784d2" (UID: "af11f2e1-23ce-4bfa-af41-f4ad85e784d2"). InnerVolumeSpecName "kube-api-access-hw2kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.378014 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a9cc90-238c-46d4-9adb-84aa486ac4b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.378070 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw2kv\" (UniqueName: \"kubernetes.io/projected/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-kube-api-access-hw2kv\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.378088 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvd4s\" (UniqueName: \"kubernetes.io/projected/29a9cc90-238c-46d4-9adb-84aa486ac4b3-kube-api-access-rvd4s\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.378102 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af11f2e1-23ce-4bfa-af41-f4ad85e784d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.580080 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-45f5-account-create-update-4mgch" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.580135 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f5-account-create-update-4mgch" event={"ID":"29a9cc90-238c-46d4-9adb-84aa486ac4b3","Type":"ContainerDied","Data":"c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c"} Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.580890 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ced34f53dc94bafbfef7c4d1001fd5df486e1432ba093d70797a111786742c" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.583002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wm86m" event={"ID":"af11f2e1-23ce-4bfa-af41-f4ad85e784d2","Type":"ContainerDied","Data":"8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5"} Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.583039 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wm86m" Jan 27 20:16:13 crc kubenswrapper[4915]: I0127 20:16:13.583043 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f2d6587502bb2ef10b72b5867c014aa456f7b510307ad9eb0ea5491d80a96c5" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.008553 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nhh6j"] Jan 27 20:16:15 crc kubenswrapper[4915]: E0127 20:16:15.009382 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a9cc90-238c-46d4-9adb-84aa486ac4b3" containerName="mariadb-account-create-update" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.009401 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a9cc90-238c-46d4-9adb-84aa486ac4b3" containerName="mariadb-account-create-update" Jan 27 20:16:15 crc kubenswrapper[4915]: E0127 20:16:15.009430 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af11f2e1-23ce-4bfa-af41-f4ad85e784d2" containerName="mariadb-database-create" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.009439 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="af11f2e1-23ce-4bfa-af41-f4ad85e784d2" containerName="mariadb-database-create" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.009659 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="af11f2e1-23ce-4bfa-af41-f4ad85e784d2" containerName="mariadb-database-create" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.009692 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a9cc90-238c-46d4-9adb-84aa486ac4b3" containerName="mariadb-account-create-update" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.010428 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.013306 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k8jzs" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.013582 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.014356 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.022976 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nhh6j"] Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.108949 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.109008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.109090 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.109118 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.109309 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84v76\" (UniqueName: \"kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.109446 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.211918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212001 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-84v76\" (UniqueName: \"kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212211 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212298 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212413 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.212866 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.217520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data\") pod 
\"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.219143 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.219372 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.220055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.236041 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84v76\" (UniqueName: \"kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76\") pod \"cinder-db-sync-nhh6j\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:15 crc kubenswrapper[4915]: I0127 20:16:15.335235 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:16 crc kubenswrapper[4915]: I0127 20:16:15.801933 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nhh6j"] Jan 27 20:16:16 crc kubenswrapper[4915]: I0127 20:16:16.613198 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nhh6j" event={"ID":"53f8b501-4793-4736-947c-9782978ee613","Type":"ContainerStarted","Data":"fc426fba05219d0f4345a4616e02896fe1ceda2521a57bf6853e42ada8db0537"} Jan 27 20:16:16 crc kubenswrapper[4915]: I0127 20:16:16.613403 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nhh6j" event={"ID":"53f8b501-4793-4736-947c-9782978ee613","Type":"ContainerStarted","Data":"f70b41881c9664bbeaaf4a1609cd1b19272751c948bdd94919edf44638019b68"} Jan 27 20:16:16 crc kubenswrapper[4915]: I0127 20:16:16.629582 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nhh6j" podStartSLOduration=2.629562982 podStartE2EDuration="2.629562982s" podCreationTimestamp="2026-01-27 20:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:16.626998328 +0000 UTC m=+5667.984851992" watchObservedRunningTime="2026-01-27 20:16:16.629562982 +0000 UTC m=+5667.987416656" Jan 27 20:16:19 crc kubenswrapper[4915]: I0127 20:16:19.649229 4915 generic.go:334] "Generic (PLEG): container finished" podID="53f8b501-4793-4736-947c-9782978ee613" containerID="fc426fba05219d0f4345a4616e02896fe1ceda2521a57bf6853e42ada8db0537" exitCode=0 Jan 27 20:16:19 crc kubenswrapper[4915]: I0127 20:16:19.649277 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nhh6j" event={"ID":"53f8b501-4793-4736-947c-9782978ee613","Type":"ContainerDied","Data":"fc426fba05219d0f4345a4616e02896fe1ceda2521a57bf6853e42ada8db0537"} Jan 27 20:16:21 crc kubenswrapper[4915]: 
I0127 20:16:21.016042 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119658 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119792 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119879 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119897 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119921 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84v76\" (UniqueName: \"kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.119979 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.120062 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle\") pod \"53f8b501-4793-4736-947c-9782978ee613\" (UID: \"53f8b501-4793-4736-947c-9782978ee613\") " Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.120608 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/53f8b501-4793-4736-947c-9782978ee613-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.126463 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.126924 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts" (OuterVolumeSpecName: "scripts") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.127636 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76" (OuterVolumeSpecName: "kube-api-access-84v76") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "kube-api-access-84v76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.156642 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.170818 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data" (OuterVolumeSpecName: "config-data") pod "53f8b501-4793-4736-947c-9782978ee613" (UID: "53f8b501-4793-4736-947c-9782978ee613"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.222553 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.222738 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.222843 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.222940 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84v76\" (UniqueName: \"kubernetes.io/projected/53f8b501-4793-4736-947c-9782978ee613-kube-api-access-84v76\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.223010 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8b501-4793-4736-947c-9782978ee613-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.668332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nhh6j" event={"ID":"53f8b501-4793-4736-947c-9782978ee613","Type":"ContainerDied","Data":"f70b41881c9664bbeaaf4a1609cd1b19272751c948bdd94919edf44638019b68"} Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.668378 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70b41881c9664bbeaaf4a1609cd1b19272751c948bdd94919edf44638019b68" Jan 27 20:16:21 crc kubenswrapper[4915]: I0127 20:16:21.668395 4915 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nhh6j" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.010314 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b77568d9-2lnpp"] Jan 27 20:16:22 crc kubenswrapper[4915]: E0127 20:16:22.010974 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f8b501-4793-4736-947c-9782978ee613" containerName="cinder-db-sync" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.010991 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f8b501-4793-4736-947c-9782978ee613" containerName="cinder-db-sync" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.011226 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f8b501-4793-4736-947c-9782978ee613" containerName="cinder-db-sync" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.012157 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.024718 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b77568d9-2lnpp"] Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.138378 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.138446 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7xn\" (UniqueName: \"kubernetes.io/projected/de74ec03-d1bc-43cc-b036-73b93a0d94b3-kube-api-access-8v7xn\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " 
pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.138509 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-dns-svc\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.138546 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.138570 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-config\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.232851 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.234537 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.236734 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k8jzs" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.236898 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.237092 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.239499 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.239531 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-config\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.239585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.239625 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v7xn\" (UniqueName: \"kubernetes.io/projected/de74ec03-d1bc-43cc-b036-73b93a0d94b3-kube-api-access-8v7xn\") pod 
\"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.239681 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-dns-svc\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.240611 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-dns-svc\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.241142 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.241643 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-config\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.242070 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.242217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/de74ec03-d1bc-43cc-b036-73b93a0d94b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.244228 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.264399 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v7xn\" (UniqueName: \"kubernetes.io/projected/de74ec03-d1bc-43cc-b036-73b93a0d94b3-kube-api-access-8v7xn\") pod \"dnsmasq-dns-75b77568d9-2lnpp\" (UID: \"de74ec03-d1bc-43cc-b036-73b93a0d94b3\") " pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.336872 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341143 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkm6\" (UniqueName: \"kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341285 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341451 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.341580 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.442985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443044 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443131 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443194 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.443326 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkm6\" (UniqueName: \"kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6\") pod \"cinder-api-0\" (UID: 
\"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.445910 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.447104 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.448418 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.449106 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.451316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.452074 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.464455 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkm6\" (UniqueName: \"kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6\") pod \"cinder-api-0\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.567416 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:16:22 crc kubenswrapper[4915]: W0127 20:16:22.877389 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde74ec03_d1bc_43cc_b036_73b93a0d94b3.slice/crio-50aaebfc978933cb92ddaf2387ca53e55a9ca13ac4d3530ed1d357f4158d41a2 WatchSource:0}: Error finding container 50aaebfc978933cb92ddaf2387ca53e55a9ca13ac4d3530ed1d357f4158d41a2: Status 404 returned error can't find the container with id 50aaebfc978933cb92ddaf2387ca53e55a9ca13ac4d3530ed1d357f4158d41a2 Jan 27 20:16:22 crc kubenswrapper[4915]: I0127 20:16:22.878691 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b77568d9-2lnpp"] Jan 27 20:16:23 crc kubenswrapper[4915]: W0127 20:16:23.073294 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d88039f_979e_4d21_b877_0b8d7fa35a18.slice/crio-7e743e7f0d0f96f910752793fd5f0c9847d44eadfab8a84a01d5c0150798e07f WatchSource:0}: Error finding container 7e743e7f0d0f96f910752793fd5f0c9847d44eadfab8a84a01d5c0150798e07f: Status 404 returned error can't find the container with id 7e743e7f0d0f96f910752793fd5f0c9847d44eadfab8a84a01d5c0150798e07f Jan 27 20:16:23 crc kubenswrapper[4915]: I0127 
20:16:23.073977 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:23 crc kubenswrapper[4915]: I0127 20:16:23.689231 4915 generic.go:334] "Generic (PLEG): container finished" podID="de74ec03-d1bc-43cc-b036-73b93a0d94b3" containerID="4ce8c793fa9e1437d4238ba9ce15990a5e4000d765bff1aed421ac2a463358f7" exitCode=0 Jan 27 20:16:23 crc kubenswrapper[4915]: I0127 20:16:23.689307 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" event={"ID":"de74ec03-d1bc-43cc-b036-73b93a0d94b3","Type":"ContainerDied","Data":"4ce8c793fa9e1437d4238ba9ce15990a5e4000d765bff1aed421ac2a463358f7"} Jan 27 20:16:23 crc kubenswrapper[4915]: I0127 20:16:23.690115 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" event={"ID":"de74ec03-d1bc-43cc-b036-73b93a0d94b3","Type":"ContainerStarted","Data":"50aaebfc978933cb92ddaf2387ca53e55a9ca13ac4d3530ed1d357f4158d41a2"} Jan 27 20:16:23 crc kubenswrapper[4915]: I0127 20:16:23.691927 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerStarted","Data":"7e743e7f0d0f96f910752793fd5f0c9847d44eadfab8a84a01d5c0150798e07f"} Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.703868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" event={"ID":"de74ec03-d1bc-43cc-b036-73b93a0d94b3","Type":"ContainerStarted","Data":"74eb83d36135888f77652ccc85f87aac1a882483a396e73969796320d2853a09"} Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.704520 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.706348 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerStarted","Data":"540825fb8f50b6b4dd4d43dbd949e6860bb97c12143cc0210a2491095215bffa"} Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.706414 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerStarted","Data":"0ab89ff5879d1bcc8aa16bf11a64bc4b343619eccf69cebd4441412db8f931f0"} Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.706545 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.744711 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" podStartSLOduration=3.744678301 podStartE2EDuration="3.744678301s" podCreationTimestamp="2026-01-27 20:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:24.733193048 +0000 UTC m=+5676.091046722" watchObservedRunningTime="2026-01-27 20:16:24.744678301 +0000 UTC m=+5676.102531965" Jan 27 20:16:24 crc kubenswrapper[4915]: I0127 20:16:24.760092 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.7600721 podStartE2EDuration="2.7600721s" podCreationTimestamp="2026-01-27 20:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:24.752842742 +0000 UTC m=+5676.110696406" watchObservedRunningTime="2026-01-27 20:16:24.7600721 +0000 UTC m=+5676.117925764" Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.338956 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b77568d9-2lnpp" Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.417903 4915 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"] Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.418185 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="dnsmasq-dns" containerID="cri-o://4baa5bcd39c1397fe94033c9db38e4ab3b07c21543cc4223c0790cbadd2fbfcb" gracePeriod=10 Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.785882 4915 generic.go:334] "Generic (PLEG): container finished" podID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerID="4baa5bcd39c1397fe94033c9db38e4ab3b07c21543cc4223c0790cbadd2fbfcb" exitCode=0 Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.785914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" event={"ID":"a001c350-6514-48b6-a7ed-78f0ad3c55bd","Type":"ContainerDied","Data":"4baa5bcd39c1397fe94033c9db38e4ab3b07c21543cc4223c0790cbadd2fbfcb"} Jan 27 20:16:32 crc kubenswrapper[4915]: I0127 20:16:32.896932 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.76:5353: connect: connection refused" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.470577 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.552946 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc\") pod \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.553008 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb\") pod \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.553085 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb\") pod \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.553245 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config\") pod \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.553287 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9j96\" (UniqueName: \"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96\") pod \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\" (UID: \"a001c350-6514-48b6-a7ed-78f0ad3c55bd\") " Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.562087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96" (OuterVolumeSpecName: "kube-api-access-d9j96") pod "a001c350-6514-48b6-a7ed-78f0ad3c55bd" (UID: "a001c350-6514-48b6-a7ed-78f0ad3c55bd"). InnerVolumeSpecName "kube-api-access-d9j96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.614217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a001c350-6514-48b6-a7ed-78f0ad3c55bd" (UID: "a001c350-6514-48b6-a7ed-78f0ad3c55bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.615779 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a001c350-6514-48b6-a7ed-78f0ad3c55bd" (UID: "a001c350-6514-48b6-a7ed-78f0ad3c55bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.624709 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config" (OuterVolumeSpecName: "config") pod "a001c350-6514-48b6-a7ed-78f0ad3c55bd" (UID: "a001c350-6514-48b6-a7ed-78f0ad3c55bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.626560 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a001c350-6514-48b6-a7ed-78f0ad3c55bd" (UID: "a001c350-6514-48b6-a7ed-78f0ad3c55bd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.655124 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.655164 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9j96\" (UniqueName: \"kubernetes.io/projected/a001c350-6514-48b6-a7ed-78f0ad3c55bd-kube-api-access-d9j96\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.655180 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.655192 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.655204 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a001c350-6514-48b6-a7ed-78f0ad3c55bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.796209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" event={"ID":"a001c350-6514-48b6-a7ed-78f0ad3c55bd","Type":"ContainerDied","Data":"fcaedb509095c74c2145fd1954c46f332544c3422cdc8c952a8897e132a9e7f2"} Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.796293 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fc458fcf7-66xrv" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.796290 4915 scope.go:117] "RemoveContainer" containerID="4baa5bcd39c1397fe94033c9db38e4ab3b07c21543cc4223c0790cbadd2fbfcb" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.819521 4915 scope.go:117] "RemoveContainer" containerID="b7bb7ea766c84ce5e6db10c0970c8297125d8dc51c0ecac9e2133c5f85e013f7" Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.838504 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"] Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.848140 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fc458fcf7-66xrv"] Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.978817 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.979197 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" containerID="cri-o://e6d52b71864b8afd15c1720688b4afee5eadf603626910959906a22d45943bb6" gracePeriod=30 Jan 27 20:16:33 crc kubenswrapper[4915]: I0127 20:16:33.979353 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" containerID="cri-o://e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.023696 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.037611 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerName="nova-scheduler-scheduler" 
containerID="cri-o://ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.053413 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.053736 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a789ec3a448c54abbf18cab4369fc2c2365028f5015485ed2367d4376fcf350c" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.078281 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.078678 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" containerID="cri-o://0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.079375 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" containerID="cri-o://493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.118272 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.118520 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://c901a3dde72a6e18513cdc0e1435998cc7c1e2466153cb2b3e0a5bd429f9bff3" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.133599 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.133830 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="c556fb16-444f-434b-a1a6-54c41a6dc92b" containerName="nova-cell1-conductor-conductor" containerID="cri-o://dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b" gracePeriod=30 Jan 27 20:16:34 crc kubenswrapper[4915]: E0127 20:16:34.274604 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae0714b_063b_418d_a1b1_d87a19153cf0.slice/crio-e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a533c6e_e261_4faa_8186_4a3105f1a9e4.slice/crio-0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a533c6e_e261_4faa_8186_4a3105f1a9e4.slice/crio-conmon-0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae0714b_063b_418d_a1b1_d87a19153cf0.slice/crio-conmon-e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.804950 4915 generic.go:334] "Generic (PLEG): container finished" podID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" 
containerID="c901a3dde72a6e18513cdc0e1435998cc7c1e2466153cb2b3e0a5bd429f9bff3" exitCode=0 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.805027 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26208d0d-91a2-4a24-8bf2-bcb293abe6ba","Type":"ContainerDied","Data":"c901a3dde72a6e18513cdc0e1435998cc7c1e2466153cb2b3e0a5bd429f9bff3"} Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.808171 4915 generic.go:334] "Generic (PLEG): container finished" podID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerID="0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb" exitCode=143 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.808208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerDied","Data":"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb"} Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.811372 4915 generic.go:334] "Generic (PLEG): container finished" podID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerID="e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438" exitCode=143 Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.811403 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fae0714b-063b-418d-a1b1-d87a19153cf0","Type":"ContainerDied","Data":"e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438"} Jan 27 20:16:34 crc kubenswrapper[4915]: I0127 20:16:34.951037 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.174442 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.286739 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle\") pod \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.286928 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l228t\" (UniqueName: \"kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t\") pod \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.286975 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data\") pod \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\" (UID: \"26208d0d-91a2-4a24-8bf2-bcb293abe6ba\") " Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.294676 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t" (OuterVolumeSpecName: "kube-api-access-l228t") pod "26208d0d-91a2-4a24-8bf2-bcb293abe6ba" (UID: "26208d0d-91a2-4a24-8bf2-bcb293abe6ba"). InnerVolumeSpecName "kube-api-access-l228t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.312236 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26208d0d-91a2-4a24-8bf2-bcb293abe6ba" (UID: "26208d0d-91a2-4a24-8bf2-bcb293abe6ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.333087 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data" (OuterVolumeSpecName: "config-data") pod "26208d0d-91a2-4a24-8bf2-bcb293abe6ba" (UID: "26208d0d-91a2-4a24-8bf2-bcb293abe6ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.368455 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" path="/var/lib/kubelet/pods/a001c350-6514-48b6-a7ed-78f0ad3c55bd/volumes" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.389417 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.389457 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l228t\" (UniqueName: \"kubernetes.io/projected/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-kube-api-access-l228t\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.389469 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26208d0d-91a2-4a24-8bf2-bcb293abe6ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.819839 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26208d0d-91a2-4a24-8bf2-bcb293abe6ba","Type":"ContainerDied","Data":"ef9df8e35e2ebab521ed2e887026fe4a6536b92fcc9d9704f123ada129cade87"} Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.819882 4915 scope.go:117] "RemoveContainer" 
containerID="c901a3dde72a6e18513cdc0e1435998cc7c1e2466153cb2b3e0a5bd429f9bff3" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.819982 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.853255 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.871385 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.896967 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:35 crc kubenswrapper[4915]: E0127 20:16:35.897421 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="dnsmasq-dns" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.897446 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="dnsmasq-dns" Jan 27 20:16:35 crc kubenswrapper[4915]: E0127 20:16:35.897475 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.897485 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:16:35 crc kubenswrapper[4915]: E0127 20:16:35.897497 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="init" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.897504 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="init" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.897757 
4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.898199 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a001c350-6514-48b6-a7ed-78f0ad3c55bd" containerName="dnsmasq-dns" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.899031 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.902246 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 20:16:35 crc kubenswrapper[4915]: I0127 20:16:35.911336 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.003403 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.003460 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.003507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22dcd\" (UniqueName: \"kubernetes.io/projected/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-kube-api-access-22dcd\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.106134 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.106607 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.106696 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22dcd\" (UniqueName: \"kubernetes.io/projected/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-kube-api-access-22dcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.112606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.114997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc 
kubenswrapper[4915]: I0127 20:16:36.126231 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22dcd\" (UniqueName: \"kubernetes.io/projected/e3e36924-9ae1-415f-87e7-2a8eca6a7a93-kube-api-access-22dcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"e3e36924-9ae1-415f-87e7-2a8eca6a7a93\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.223514 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.374240 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.514890 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle\") pod \"c556fb16-444f-434b-a1a6-54c41a6dc92b\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.515027 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data\") pod \"c556fb16-444f-434b-a1a6-54c41a6dc92b\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.515106 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7msr\" (UniqueName: \"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr\") pod \"c556fb16-444f-434b-a1a6-54c41a6dc92b\" (UID: \"c556fb16-444f-434b-a1a6-54c41a6dc92b\") " Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.519106 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr" (OuterVolumeSpecName: "kube-api-access-l7msr") pod "c556fb16-444f-434b-a1a6-54c41a6dc92b" (UID: "c556fb16-444f-434b-a1a6-54c41a6dc92b"). InnerVolumeSpecName "kube-api-access-l7msr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.540977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data" (OuterVolumeSpecName: "config-data") pod "c556fb16-444f-434b-a1a6-54c41a6dc92b" (UID: "c556fb16-444f-434b-a1a6-54c41a6dc92b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.568685 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c556fb16-444f-434b-a1a6-54c41a6dc92b" (UID: "c556fb16-444f-434b-a1a6-54c41a6dc92b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.616646 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.616686 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c556fb16-444f-434b-a1a6-54c41a6dc92b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.616697 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7msr\" (UniqueName: \"kubernetes.io/projected/c556fb16-444f-434b-a1a6-54c41a6dc92b-kube-api-access-l7msr\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.689266 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:16:36 crc kubenswrapper[4915]: W0127 20:16:36.689990 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e36924_9ae1_415f_87e7_2a8eca6a7a93.slice/crio-96d87e97b84b3faaebdc71aa1c81dbdd70bbb8fc7809eea3b56e2ef0284d4fb4 WatchSource:0}: Error finding container 96d87e97b84b3faaebdc71aa1c81dbdd70bbb8fc7809eea3b56e2ef0284d4fb4: Status 404 returned error can't find the container with id 96d87e97b84b3faaebdc71aa1c81dbdd70bbb8fc7809eea3b56e2ef0284d4fb4 Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.830921 4915 generic.go:334] "Generic (PLEG): container finished" podID="c556fb16-444f-434b-a1a6-54c41a6dc92b" containerID="dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b" exitCode=0 Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.830990 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"c556fb16-444f-434b-a1a6-54c41a6dc92b","Type":"ContainerDied","Data":"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b"} Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.831015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c556fb16-444f-434b-a1a6-54c41a6dc92b","Type":"ContainerDied","Data":"bd11bde80d0770ca35ee0df746a6f4c49f8c78533ac690ee307429fe40d6443d"} Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.831031 4915 scope.go:117] "RemoveContainer" containerID="dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.831030 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.834312 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e3e36924-9ae1-415f-87e7-2a8eca6a7a93","Type":"ContainerStarted","Data":"96d87e97b84b3faaebdc71aa1c81dbdd70bbb8fc7809eea3b56e2ef0284d4fb4"} Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.852652 4915 scope.go:117] "RemoveContainer" containerID="dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b" Jan 27 20:16:36 crc kubenswrapper[4915]: E0127 20:16:36.853185 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b\": container with ID starting with dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b not found: ID does not exist" containerID="dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.853221 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b"} err="failed to get container status \"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b\": rpc error: code = NotFound desc = could not find container \"dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b\": container with ID starting with dc30cbf49820b333f04cab867bdb423603015bbdc7ff1cc34b576f324f21c64b not found: ID does not exist" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.874805 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.907775 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.918598 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:36 crc kubenswrapper[4915]: E0127 20:16:36.919281 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c556fb16-444f-434b-a1a6-54c41a6dc92b" containerName="nova-cell1-conductor-conductor" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.919307 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c556fb16-444f-434b-a1a6-54c41a6dc92b" containerName="nova-cell1-conductor-conductor" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.919614 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c556fb16-444f-434b-a1a6-54c41a6dc92b" containerName="nova-cell1-conductor-conductor" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.920382 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.922696 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 20:16:36 crc kubenswrapper[4915]: I0127 20:16:36.931254 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.024210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.024312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgrc\" (UniqueName: \"kubernetes.io/projected/448a7ba3-9525-4822-bcd1-336594ae694a-kube-api-access-hpgrc\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.024340 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.126181 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgrc\" (UniqueName: \"kubernetes.io/projected/448a7ba3-9525-4822-bcd1-336594ae694a-kube-api-access-hpgrc\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 
20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.126231 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.126343 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.131091 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.133248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a7ba3-9525-4822-bcd1-336594ae694a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.143930 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgrc\" (UniqueName: \"kubernetes.io/projected/448a7ba3-9525-4822-bcd1-336594ae694a-kube-api-access-hpgrc\") pod \"nova-cell1-conductor-0\" (UID: \"448a7ba3-9525-4822-bcd1-336594ae694a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.229997 4915 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.74:8775/\": read tcp 10.217.0.2:43442->10.217.1.74:8775: read: connection reset by peer" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.230020 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.74:8775/\": read tcp 10.217.0.2:43440->10.217.1.74:8775: read: connection reset by peer" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.244486 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.395110 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26208d0d-91a2-4a24-8bf2-bcb293abe6ba" path="/var/lib/kubelet/pods/26208d0d-91a2-4a24-8bf2-bcb293abe6ba/volumes" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.395921 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c556fb16-444f-434b-a1a6-54c41a6dc92b" path="/var/lib/kubelet/pods/c556fb16-444f-434b-a1a6-54c41a6dc92b/volumes" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.406320 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.75:8774/\": read tcp 10.217.0.2:40436->10.217.1.75:8774: read: connection reset by peer" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.406506 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.75:8774/\": read tcp 
10.217.0.2:40422->10.217.1.75:8774: read: connection reset by peer" Jan 27 20:16:37 crc kubenswrapper[4915]: E0127 20:16:37.484234 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:16:37 crc kubenswrapper[4915]: E0127 20:16:37.488044 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:16:37 crc kubenswrapper[4915]: E0127 20:16:37.493302 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:16:37 crc kubenswrapper[4915]: E0127 20:16:37.493374 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerName="nova-scheduler-scheduler" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.781215 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.783376 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.877541 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e3e36924-9ae1-415f-87e7-2a8eca6a7a93","Type":"ContainerStarted","Data":"6b966d45e9342bfdfba17f7536f4ccf6d788c0b560c50dbc4b19cda197508c6d"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.884496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"448a7ba3-9525-4822-bcd1-336594ae694a","Type":"ContainerStarted","Data":"bb3911300664a961af58d3a5399765f8cbead3333781160d09cd261899b03e81"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.895198 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.895182487 podStartE2EDuration="2.895182487s" podCreationTimestamp="2026-01-27 20:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:37.892575453 +0000 UTC m=+5689.250429127" watchObservedRunningTime="2026-01-27 20:16:37.895182487 +0000 UTC m=+5689.253036151" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.910044 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.911003 4915 generic.go:334] "Generic (PLEG): container finished" podID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" containerID="a789ec3a448c54abbf18cab4369fc2c2365028f5015485ed2367d4376fcf350c" exitCode=0 Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.911083 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c3080a4d-b714-443c-83cc-ba3e7b82e3b5","Type":"ContainerDied","Data":"a789ec3a448c54abbf18cab4369fc2c2365028f5015485ed2367d4376fcf350c"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.912891 4915 generic.go:334] "Generic (PLEG): container finished" podID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerID="e6d52b71864b8afd15c1720688b4afee5eadf603626910959906a22d45943bb6" exitCode=0 Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.912948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fae0714b-063b-418d-a1b1-d87a19153cf0","Type":"ContainerDied","Data":"e6d52b71864b8afd15c1720688b4afee5eadf603626910959906a22d45943bb6"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.912988 4915 scope.go:117] "RemoveContainer" containerID="e6d52b71864b8afd15c1720688b4afee5eadf603626910959906a22d45943bb6" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.913115 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.918510 4915 generic.go:334] "Generic (PLEG): container finished" podID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerID="493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b" exitCode=0 Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.918551 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerDied","Data":"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.918578 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a533c6e-e261-4faa-8186-4a3105f1a9e4","Type":"ContainerDied","Data":"3f48ac5cc8619c72403c17db7fa4c69424fe887b1355ea85eff7f89051efde20"} Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.918636 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.947499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle\") pod \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.947560 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczhs\" (UniqueName: \"kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs\") pod \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.947594 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs\") pod \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.947646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data\") pod \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\" (UID: \"4a533c6e-e261-4faa-8186-4a3105f1a9e4\") " Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.952256 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs" (OuterVolumeSpecName: "logs") pod "4a533c6e-e261-4faa-8186-4a3105f1a9e4" (UID: "4a533c6e-e261-4faa-8186-4a3105f1a9e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.953658 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs" (OuterVolumeSpecName: "kube-api-access-xczhs") pod "4a533c6e-e261-4faa-8186-4a3105f1a9e4" (UID: "4a533c6e-e261-4faa-8186-4a3105f1a9e4"). InnerVolumeSpecName "kube-api-access-xczhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.989505 4915 scope.go:117] "RemoveContainer" containerID="e292794cbd5069421cedb0e8499475dd8c26f016ead0a3b95ce9f4f643ece438" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.995235 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a533c6e-e261-4faa-8186-4a3105f1a9e4" (UID: "4a533c6e-e261-4faa-8186-4a3105f1a9e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:37 crc kubenswrapper[4915]: I0127 20:16:37.997966 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data" (OuterVolumeSpecName: "config-data") pod "4a533c6e-e261-4faa-8186-4a3105f1a9e4" (UID: "4a533c6e-e261-4faa-8186-4a3105f1a9e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.055438 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle\") pod \"fae0714b-063b-418d-a1b1-d87a19153cf0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.055518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs\") pod \"fae0714b-063b-418d-a1b1-d87a19153cf0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.055621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2gv\" (UniqueName: \"kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv\") pod \"fae0714b-063b-418d-a1b1-d87a19153cf0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.055699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data\") pod \"fae0714b-063b-418d-a1b1-d87a19153cf0\" (UID: \"fae0714b-063b-418d-a1b1-d87a19153cf0\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.056233 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.056249 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczhs\" (UniqueName: \"kubernetes.io/projected/4a533c6e-e261-4faa-8186-4a3105f1a9e4-kube-api-access-xczhs\") on node \"crc\" 
DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.056262 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a533c6e-e261-4faa-8186-4a3105f1a9e4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.056272 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a533c6e-e261-4faa-8186-4a3105f1a9e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.069315 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs" (OuterVolumeSpecName: "logs") pod "fae0714b-063b-418d-a1b1-d87a19153cf0" (UID: "fae0714b-063b-418d-a1b1-d87a19153cf0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.073141 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv" (OuterVolumeSpecName: "kube-api-access-qs2gv") pod "fae0714b-063b-418d-a1b1-d87a19153cf0" (UID: "fae0714b-063b-418d-a1b1-d87a19153cf0"). InnerVolumeSpecName "kube-api-access-qs2gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.121991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data" (OuterVolumeSpecName: "config-data") pod "fae0714b-063b-418d-a1b1-d87a19153cf0" (UID: "fae0714b-063b-418d-a1b1-d87a19153cf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.185010 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fae0714b-063b-418d-a1b1-d87a19153cf0" (UID: "fae0714b-063b-418d-a1b1-d87a19153cf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.196559 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.199380 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0714b-063b-418d-a1b1-d87a19153cf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.199414 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae0714b-063b-418d-a1b1-d87a19153cf0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.199427 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2gv\" (UniqueName: \"kubernetes.io/projected/fae0714b-063b-418d-a1b1-d87a19153cf0-kube-api-access-qs2gv\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.276473 4915 scope.go:117] "RemoveContainer" containerID="493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.363763 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.365450 4915 scope.go:117] "RemoveContainer" containerID="0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.404351 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data\") pod \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.404518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf2tz\" (UniqueName: \"kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz\") pod \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.404574 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle\") pod \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\" (UID: \"c3080a4d-b714-443c-83cc-ba3e7b82e3b5\") " Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.411990 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz" (OuterVolumeSpecName: "kube-api-access-pf2tz") pod "c3080a4d-b714-443c-83cc-ba3e7b82e3b5" (UID: "c3080a4d-b714-443c-83cc-ba3e7b82e3b5"). InnerVolumeSpecName "kube-api-access-pf2tz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.412151 4915 scope.go:117] "RemoveContainer" containerID="493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.417966 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b\": container with ID starting with 493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b not found: ID does not exist" containerID="493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.418031 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b"} err="failed to get container status \"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b\": rpc error: code = NotFound desc = could not find container \"493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b\": container with ID starting with 493573223ef72b46486d52188a23d8ae6e6e0d3cf8deebf23c3c67dbe829228b not found: ID does not exist" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.418070 4915 scope.go:117] "RemoveContainer" containerID="0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.425946 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb\": container with ID starting with 0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb not found: ID does not exist" containerID="0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.425990 
4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb"} err="failed to get container status \"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb\": rpc error: code = NotFound desc = could not find container \"0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb\": container with ID starting with 0d6e1560e3d1a5bc420801f29ba850f2bdaf6a4a5ec110ac98c0a68820db55eb not found: ID does not exist" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.433101 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.468579 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3080a4d-b714-443c-83cc-ba3e7b82e3b5" (UID: "c3080a4d-b714-443c-83cc-ba3e7b82e3b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.470763 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.471047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data" (OuterVolumeSpecName: "config-data") pod "c3080a4d-b714-443c-83cc-ba3e7b82e3b5" (UID: "c3080a4d-b714-443c-83cc-ba3e7b82e3b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.484366 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.491552 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.499984 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.500431 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.500491 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.500551 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" containerName="nova-cell0-conductor-conductor" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.500595 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" containerName="nova-cell0-conductor-conductor" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.500650 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.500698 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.500761 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 
20:16:38.500825 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" Jan 27 20:16:38 crc kubenswrapper[4915]: E0127 20:16:38.500907 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.500957 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.501164 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-log" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.501221 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" containerName="nova-metadata-metadata" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.501279 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" containerName="nova-cell0-conductor-conductor" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.501339 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-log" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.501386 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" containerName="nova-api-api" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.502362 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.505734 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.512958 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.513012 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.513042 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf2tz\" (UniqueName: \"kubernetes.io/projected/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-kube-api-access-pf2tz\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.513055 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3080a4d-b714-443c-83cc-ba3e7b82e3b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.517064 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.522132 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.525403 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.529958 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.613745 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614217 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b81bcc90-1a59-47ed-bfe9-23cae865c970-logs\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614313 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xknn\" (UniqueName: \"kubernetes.io/projected/b81bcc90-1a59-47ed-bfe9-23cae865c970-kube-api-access-4xknn\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614377 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-config-data\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614439 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614502 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-logs\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614568 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xtk\" (UniqueName: \"kubernetes.io/projected/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-kube-api-access-67xtk\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.614692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-config-data\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.716354 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.716647 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b81bcc90-1a59-47ed-bfe9-23cae865c970-logs\") pod \"nova-api-0\" (UID: 
\"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.716750 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xknn\" (UniqueName: \"kubernetes.io/projected/b81bcc90-1a59-47ed-bfe9-23cae865c970-kube-api-access-4xknn\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.716863 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-config-data\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.717384 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.717515 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-logs\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.717635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xtk\" (UniqueName: \"kubernetes.io/projected/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-kube-api-access-67xtk\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.717927 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-config-data\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.718101 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-logs\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.717204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b81bcc90-1a59-47ed-bfe9-23cae865c970-logs\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.720002 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.721067 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.723181 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81bcc90-1a59-47ed-bfe9-23cae865c970-config-data\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.725146 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-config-data\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.736290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xknn\" (UniqueName: \"kubernetes.io/projected/b81bcc90-1a59-47ed-bfe9-23cae865c970-kube-api-access-4xknn\") pod \"nova-api-0\" (UID: \"b81bcc90-1a59-47ed-bfe9-23cae865c970\") " pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.738714 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xtk\" (UniqueName: \"kubernetes.io/projected/e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e-kube-api-access-67xtk\") pod \"nova-metadata-0\" (UID: \"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e\") " pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.827377 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.843726 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.938381 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"448a7ba3-9525-4822-bcd1-336594ae694a","Type":"ContainerStarted","Data":"862d4a3ad52a42c797aebce863b8acf6a847a9a7dab7f6067887de418a1868dd"} Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.938627 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.941181 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.944671 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c3080a4d-b714-443c-83cc-ba3e7b82e3b5","Type":"ContainerDied","Data":"6a695faf470c56ac1f1f9e212eb9fb9ee0c55e737c5d94c192cd4aa656ce3bcd"} Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.944725 4915 scope.go:117] "RemoveContainer" containerID="a789ec3a448c54abbf18cab4369fc2c2365028f5015485ed2367d4376fcf350c" Jan 27 20:16:38 crc kubenswrapper[4915]: I0127 20:16:38.963609 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.963593892 podStartE2EDuration="2.963593892s" podCreationTimestamp="2026-01-27 20:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:38.962161297 +0000 UTC m=+5690.320014961" watchObservedRunningTime="2026-01-27 20:16:38.963593892 +0000 UTC m=+5690.321447556" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.076507 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.097473 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.109049 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.110328 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.118039 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.119052 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.232926 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2qp\" (UniqueName: \"kubernetes.io/projected/ee50a3c9-47e3-457d-b0f2-5d828f365449-kube-api-access-pt2qp\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.233042 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.233158 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.335343 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2qp\" (UniqueName: \"kubernetes.io/projected/ee50a3c9-47e3-457d-b0f2-5d828f365449-kube-api-access-pt2qp\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 
20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.335497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.335592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.341173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.350333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee50a3c9-47e3-457d-b0f2-5d828f365449-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.353914 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2qp\" (UniqueName: \"kubernetes.io/projected/ee50a3c9-47e3-457d-b0f2-5d828f365449-kube-api-access-pt2qp\") pod \"nova-cell0-conductor-0\" (UID: \"ee50a3c9-47e3-457d-b0f2-5d828f365449\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.370850 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4a533c6e-e261-4faa-8186-4a3105f1a9e4" path="/var/lib/kubelet/pods/4a533c6e-e261-4faa-8186-4a3105f1a9e4/volumes" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.371424 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3080a4d-b714-443c-83cc-ba3e7b82e3b5" path="/var/lib/kubelet/pods/c3080a4d-b714-443c-83cc-ba3e7b82e3b5/volumes" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.371932 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae0714b-063b-418d-a1b1-d87a19153cf0" path="/var/lib/kubelet/pods/fae0714b-063b-418d-a1b1-d87a19153cf0/volumes" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.372919 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: W0127 20:16:39.383292 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81bcc90_1a59_47ed_bfe9_23cae865c970.slice/crio-d0e26fd5ea0c64c205f921300212677234e2ac35c8b5f873ddb56cad627e228c WatchSource:0}: Error finding container d0e26fd5ea0c64c205f921300212677234e2ac35c8b5f873ddb56cad627e228c: Status 404 returned error can't find the container with id d0e26fd5ea0c64c205f921300212677234e2ac35c8b5f873ddb56cad627e228c Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.386145 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.456275 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.896319 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:16:39 crc kubenswrapper[4915]: W0127 20:16:39.906482 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee50a3c9_47e3_457d_b0f2_5d828f365449.slice/crio-e252b7755160c5043d2b4ce4b4c69bc8d011271ba42fc27ca94cfa1ddaafdae8 WatchSource:0}: Error finding container e252b7755160c5043d2b4ce4b4c69bc8d011271ba42fc27ca94cfa1ddaafdae8: Status 404 returned error can't find the container with id e252b7755160c5043d2b4ce4b4c69bc8d011271ba42fc27ca94cfa1ddaafdae8 Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.960593 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ee50a3c9-47e3-457d-b0f2-5d828f365449","Type":"ContainerStarted","Data":"e252b7755160c5043d2b4ce4b4c69bc8d011271ba42fc27ca94cfa1ddaafdae8"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.966637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b81bcc90-1a59-47ed-bfe9-23cae865c970","Type":"ContainerStarted","Data":"7052f112783bdaf5d1122147f1a3cdf5fa69a1b7e0f229f6317082c386b1d350"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.966914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b81bcc90-1a59-47ed-bfe9-23cae865c970","Type":"ContainerStarted","Data":"123836058d4d0a55e3af2dc61bc0e047010bd2906ee9a8f42aca11d4612235e2"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.966944 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b81bcc90-1a59-47ed-bfe9-23cae865c970","Type":"ContainerStarted","Data":"d0e26fd5ea0c64c205f921300212677234e2ac35c8b5f873ddb56cad627e228c"} Jan 27 20:16:39 crc 
kubenswrapper[4915]: I0127 20:16:39.972074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e","Type":"ContainerStarted","Data":"1ec365590db2ac123a1b53e26e976cc7bd4152644d9be296c6a2539900672230"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.972123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e","Type":"ContainerStarted","Data":"ee3e8cacc3e8b311a9e886d67f5f3b7ee78ae21705c6bf0709c56903064bd9cf"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.972135 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e","Type":"ContainerStarted","Data":"3196475ed578010d1d91e3885484a5a894c93c290042e5a00a0d99d5761a3157"} Jan 27 20:16:39 crc kubenswrapper[4915]: I0127 20:16:39.989581 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.989565351 podStartE2EDuration="1.989565351s" podCreationTimestamp="2026-01-27 20:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:39.983909221 +0000 UTC m=+5691.341762885" watchObservedRunningTime="2026-01-27 20:16:39.989565351 +0000 UTC m=+5691.347419015" Jan 27 20:16:40 crc kubenswrapper[4915]: I0127 20:16:40.023316 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.023302332 podStartE2EDuration="2.023302332s" podCreationTimestamp="2026-01-27 20:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:40.020890172 +0000 UTC m=+5691.378743836" watchObservedRunningTime="2026-01-27 20:16:40.023302332 +0000 UTC m=+5691.381155996" Jan 
27 20:16:40 crc kubenswrapper[4915]: I0127 20:16:40.984115 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ee50a3c9-47e3-457d-b0f2-5d828f365449","Type":"ContainerStarted","Data":"d44dab790afd64ed90705226f5c7a31faa059dae314863cd9e917ea3d277f016"} Jan 27 20:16:40 crc kubenswrapper[4915]: I0127 20:16:40.984871 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:40 crc kubenswrapper[4915]: I0127 20:16:40.998439 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.9984161070000002 podStartE2EDuration="1.998416107s" podCreationTimestamp="2026-01-27 20:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:40.996974731 +0000 UTC m=+5692.354828395" watchObservedRunningTime="2026-01-27 20:16:40.998416107 +0000 UTC m=+5692.356269771" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.224195 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.535274 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.582363 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp7w6\" (UniqueName: \"kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6\") pod \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.582440 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data\") pod \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.582531 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle\") pod \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\" (UID: \"0e756d45-4816-4bfd-8d5b-05dd204c3fe5\") " Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.590559 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6" (OuterVolumeSpecName: "kube-api-access-fp7w6") pod "0e756d45-4816-4bfd-8d5b-05dd204c3fe5" (UID: "0e756d45-4816-4bfd-8d5b-05dd204c3fe5"). InnerVolumeSpecName "kube-api-access-fp7w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.628982 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e756d45-4816-4bfd-8d5b-05dd204c3fe5" (UID: "0e756d45-4816-4bfd-8d5b-05dd204c3fe5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.637953 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data" (OuterVolumeSpecName: "config-data") pod "0e756d45-4816-4bfd-8d5b-05dd204c3fe5" (UID: "0e756d45-4816-4bfd-8d5b-05dd204c3fe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.686216 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp7w6\" (UniqueName: \"kubernetes.io/projected/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-kube-api-access-fp7w6\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.686254 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.686266 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e756d45-4816-4bfd-8d5b-05dd204c3fe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.994318 4915 generic.go:334] "Generic (PLEG): container finished" podID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" exitCode=0 Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.994562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0e756d45-4816-4bfd-8d5b-05dd204c3fe5","Type":"ContainerDied","Data":"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc"} Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.994610 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.994630 4915 scope.go:117] "RemoveContainer" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" Jan 27 20:16:41 crc kubenswrapper[4915]: I0127 20:16:41.994615 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0e756d45-4816-4bfd-8d5b-05dd204c3fe5","Type":"ContainerDied","Data":"21ae58e15b6dddfc0026ab1c2a1becef2c5b3e4b4566c7e4cb48314f5aa6676e"} Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.032149 4915 scope.go:117] "RemoveContainer" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" Jan 27 20:16:42 crc kubenswrapper[4915]: E0127 20:16:42.033152 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc\": container with ID starting with ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc not found: ID does not exist" containerID="ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.033189 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc"} err="failed to get container status \"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc\": rpc error: code = NotFound desc = could not find container \"ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc\": container with ID starting with ec350fecec8eb796545e03ec4f4c61bcbed3571ec3b44ca1037ea0ad9db961cc not found: ID does not exist" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.050672 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.070082 4915 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.098367 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:42 crc kubenswrapper[4915]: E0127 20:16:42.099002 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerName="nova-scheduler-scheduler" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.099034 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerName="nova-scheduler-scheduler" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.099397 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" containerName="nova-scheduler-scheduler" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.100444 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.102280 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.109518 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.197556 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt75x\" (UniqueName: \"kubernetes.io/projected/97d1cd56-fbc6-4949-a880-f04476b51277-kube-api-access-dt75x\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.197845 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.197920 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-config-data\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.300316 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.300373 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-config-data\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.300462 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt75x\" (UniqueName: \"kubernetes.io/projected/97d1cd56-fbc6-4949-a880-f04476b51277-kube-api-access-dt75x\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.305637 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-config-data\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " 
pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.306017 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d1cd56-fbc6-4949-a880-f04476b51277-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.318308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt75x\" (UniqueName: \"kubernetes.io/projected/97d1cd56-fbc6-4949-a880-f04476b51277-kube-api-access-dt75x\") pod \"nova-scheduler-0\" (UID: \"97d1cd56-fbc6-4949-a880-f04476b51277\") " pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.421736 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:16:42 crc kubenswrapper[4915]: I0127 20:16:42.906722 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:16:42 crc kubenswrapper[4915]: W0127 20:16:42.913548 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d1cd56_fbc6_4949_a880_f04476b51277.slice/crio-47cdb208fd58c4ba0d41af542a373b5c37101719e1779ed25eebc90f595fff80 WatchSource:0}: Error finding container 47cdb208fd58c4ba0d41af542a373b5c37101719e1779ed25eebc90f595fff80: Status 404 returned error can't find the container with id 47cdb208fd58c4ba0d41af542a373b5c37101719e1779ed25eebc90f595fff80 Jan 27 20:16:43 crc kubenswrapper[4915]: I0127 20:16:43.011056 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97d1cd56-fbc6-4949-a880-f04476b51277","Type":"ContainerStarted","Data":"47cdb208fd58c4ba0d41af542a373b5c37101719e1779ed25eebc90f595fff80"} Jan 27 20:16:43 crc kubenswrapper[4915]: I0127 20:16:43.371234 
4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e756d45-4816-4bfd-8d5b-05dd204c3fe5" path="/var/lib/kubelet/pods/0e756d45-4816-4bfd-8d5b-05dd204c3fe5/volumes" Jan 27 20:16:43 crc kubenswrapper[4915]: I0127 20:16:43.844027 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:16:43 crc kubenswrapper[4915]: I0127 20:16:43.844086 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:16:44 crc kubenswrapper[4915]: I0127 20:16:44.022893 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"97d1cd56-fbc6-4949-a880-f04476b51277","Type":"ContainerStarted","Data":"0f3f69fbe9df926db6a000a19545f7ab9c805d7854682e9670b2c9271be6af61"} Jan 27 20:16:44 crc kubenswrapper[4915]: I0127 20:16:44.046163 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.046130998 podStartE2EDuration="2.046130998s" podCreationTimestamp="2026-01-27 20:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:44.040868458 +0000 UTC m=+5695.398722122" watchObservedRunningTime="2026-01-27 20:16:44.046130998 +0000 UTC m=+5695.403984692" Jan 27 20:16:46 crc kubenswrapper[4915]: I0127 20:16:46.224274 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:46 crc kubenswrapper[4915]: I0127 20:16:46.241140 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:47 crc kubenswrapper[4915]: I0127 20:16:47.062687 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:16:47 crc kubenswrapper[4915]: I0127 20:16:47.278704 4915 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 20:16:47 crc kubenswrapper[4915]: I0127 20:16:47.422055 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 20:16:48 crc kubenswrapper[4915]: I0127 20:16:48.827627 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:16:48 crc kubenswrapper[4915]: I0127 20:16:48.828719 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:16:48 crc kubenswrapper[4915]: I0127 20:16:48.844373 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:16:48 crc kubenswrapper[4915]: I0127 20:16:48.844449 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:16:49 crc kubenswrapper[4915]: I0127 20:16:49.494281 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 20:16:49 crc kubenswrapper[4915]: I0127 20:16:49.951088 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b81bcc90-1a59-47ed-bfe9-23cae865c970" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:16:49 crc kubenswrapper[4915]: I0127 20:16:49.951168 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b81bcc90-1a59-47ed-bfe9-23cae865c970" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:16:49 crc kubenswrapper[4915]: I0127 20:16:49.951282 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:16:49 crc kubenswrapper[4915]: I0127 20:16:49.951935 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:16:50 crc kubenswrapper[4915]: I0127 20:16:50.624392 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:16:50 crc kubenswrapper[4915]: I0127 20:16:50.624764 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:16:52 crc kubenswrapper[4915]: I0127 20:16:52.422990 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 20:16:52 crc kubenswrapper[4915]: I0127 20:16:52.452163 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.040897 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.043454 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.048518 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.059644 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.138532 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.140393 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.140523 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.140659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.140731 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.140855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.141167 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd55l\" (UniqueName: \"kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.242965 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243016 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243053 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " 
pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243107 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd55l\" (UniqueName: \"kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.243323 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.250422 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.251447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.251545 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.252956 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.265752 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd55l\" (UniqueName: \"kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l\") pod \"cinder-scheduler-0\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") " pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.367377 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:16:53 crc kubenswrapper[4915]: I0127 20:16:53.853847 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:16:53 crc kubenswrapper[4915]: W0127 20:16:53.860995 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7401dc91_1189_4f7f_bf5a_23dc4f84e7b4.slice/crio-14cbdc17df4bca112ce2c30d01bdccd9fca554d4610f80e920c482cf403af80f WatchSource:0}: Error finding container 14cbdc17df4bca112ce2c30d01bdccd9fca554d4610f80e920c482cf403af80f: Status 404 returned error can't find the container with id 14cbdc17df4bca112ce2c30d01bdccd9fca554d4610f80e920c482cf403af80f Jan 27 20:16:54 crc kubenswrapper[4915]: I0127 20:16:54.127065 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerStarted","Data":"14cbdc17df4bca112ce2c30d01bdccd9fca554d4610f80e920c482cf403af80f"} Jan 27 20:16:54 crc kubenswrapper[4915]: I0127 20:16:54.745525 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:54 crc kubenswrapper[4915]: I0127 20:16:54.746543 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api-log" containerID="cri-o://540825fb8f50b6b4dd4d43dbd949e6860bb97c12143cc0210a2491095215bffa" gracePeriod=30 Jan 27 20:16:54 crc kubenswrapper[4915]: I0127 20:16:54.747091 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api" containerID="cri-o://0ab89ff5879d1bcc8aa16bf11a64bc4b343619eccf69cebd4441412db8f931f0" gracePeriod=30 Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.149254 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerStarted","Data":"53f16565e772682e7037d062649c0bb43911797952f39483e365ff260264db2c"} Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.150963 4915 generic.go:334] "Generic (PLEG): container finished" podID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerID="540825fb8f50b6b4dd4d43dbd949e6860bb97c12143cc0210a2491095215bffa" exitCode=143 Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.151012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerDied","Data":"540825fb8f50b6b4dd4d43dbd949e6860bb97c12143cc0210a2491095215bffa"} Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.298396 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.301320 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.319920 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.320829 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.402887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403013 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-run\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403044 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403074 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc 
kubenswrapper[4915]: I0127 20:16:55.403102 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bw6x\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-kube-api-access-8bw6x\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403341 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403405 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403425 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 
20:16:55.403451 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403471 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403681 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.403705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505515 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505636 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505698 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505719 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505749 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-run\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505852 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505913 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bw6x\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-kube-api-access-8bw6x\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.505946 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506022 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506055 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc 
kubenswrapper[4915]: I0127 20:16:55.506102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506123 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506225 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506318 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506341 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506568 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.506740 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.507146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.507563 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.507619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-run\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.507879 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " 
pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.509313 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1677f1f0-6421-4baa-82a5-9eb372118c1c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.530926 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.530924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.531377 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bw6x\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-kube-api-access-8bw6x\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.531736 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1677f1f0-6421-4baa-82a5-9eb372118c1c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.532157 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.533235 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1677f1f0-6421-4baa-82a5-9eb372118c1c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"1677f1f0-6421-4baa-82a5-9eb372118c1c\") " pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.667368 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.954075 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.956484 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.959244 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 27 20:16:55 crc kubenswrapper[4915]: I0127 20:16:55.986343 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016039 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-dev\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016177 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016249 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016377 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016597 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-ceph\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016658 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnl2\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-kube-api-access-5mnl2\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-sys\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016869 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-scripts\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.016940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.017002 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-run\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.017028 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: 
\"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.017099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119694 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119758 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119813 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119839 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119880 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119922 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-ceph\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119953 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnl2\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-kube-api-access-5mnl2\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.119982 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-sys\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120048 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-scripts\") pod \"cinder-backup-0\" (UID: 
\"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120092 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-run\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120141 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120172 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-dev\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120197 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.120319 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121074 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-sys\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121210 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121236 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121268 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-run\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121305 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121345 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121378 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121403 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-dev\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.121436 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3e47fb87-132e-4753-b24f-c79929b6e73e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.129034 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " 
pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.129755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.131022 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-scripts\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.131616 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-ceph\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.132446 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e47fb87-132e-4753-b24f-c79929b6e73e-config-data\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.142743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnl2\" (UniqueName: \"kubernetes.io/projected/3e47fb87-132e-4753-b24f-c79929b6e73e-kube-api-access-5mnl2\") pod \"cinder-backup-0\" (UID: \"3e47fb87-132e-4753-b24f-c79929b6e73e\") " pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.191148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerStarted","Data":"7772f480b01e43576ba6a39f8cfc01c7b9c5b581ebb4348634e276fc7fa021a9"} Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.215532 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.215512691 podStartE2EDuration="3.215512691s" podCreationTimestamp="2026-01-27 20:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:16:56.212527667 +0000 UTC m=+5707.570381341" watchObservedRunningTime="2026-01-27 20:16:56.215512691 +0000 UTC m=+5707.573366355" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.247579 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 20:16:56 crc kubenswrapper[4915]: W0127 20:16:56.255179 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1677f1f0_6421_4baa_82a5_9eb372118c1c.slice/crio-efe0b4a0ce18f6ee4ffc55b3e50ebda82d56017051f57fc342941c48bfa2c31e WatchSource:0}: Error finding container efe0b4a0ce18f6ee4ffc55b3e50ebda82d56017051f57fc342941c48bfa2c31e: Status 404 returned error can't find the container with id efe0b4a0ce18f6ee4ffc55b3e50ebda82d56017051f57fc342941c48bfa2c31e Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.288636 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 20:16:56 crc kubenswrapper[4915]: I0127 20:16:56.832752 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 20:16:57 crc kubenswrapper[4915]: I0127 20:16:57.201209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e47fb87-132e-4753-b24f-c79929b6e73e","Type":"ContainerStarted","Data":"c61d33416a623d4d94c3451d25242c893efe174768c1bfc8f4ef355ce2150234"} Jan 27 20:16:57 crc kubenswrapper[4915]: I0127 20:16:57.203228 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1677f1f0-6421-4baa-82a5-9eb372118c1c","Type":"ContainerStarted","Data":"efe0b4a0ce18f6ee4ffc55b3e50ebda82d56017051f57fc342941c48bfa2c31e"} Jan 27 20:16:57 crc kubenswrapper[4915]: I0127 20:16:57.921004 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.81:8776/healthcheck\": read tcp 10.217.0.2:34374->10.217.1.81:8776: read: connection reset by peer" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.218063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1677f1f0-6421-4baa-82a5-9eb372118c1c","Type":"ContainerStarted","Data":"2323064198c5f5a6a25bb164ca3b722844bd1762384ee3a9e1ab07d9d086605e"} Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.218703 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"1677f1f0-6421-4baa-82a5-9eb372118c1c","Type":"ContainerStarted","Data":"750093b4d665a671773e9a71f7ac15abbc6147ff69b57f86b135503a21924e6b"} Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.223415 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"3e47fb87-132e-4753-b24f-c79929b6e73e","Type":"ContainerStarted","Data":"081fd67de6a4333658764d1bb0b0415e29803985f97ef5e66d79396b6ecbe6e3"} Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.223474 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3e47fb87-132e-4753-b24f-c79929b6e73e","Type":"ContainerStarted","Data":"ba30eb020c32b445a57705d146511dbeb71bd8937b83f984c8d0964908f9429f"} Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.237585 4915 generic.go:334] "Generic (PLEG): container finished" podID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerID="0ab89ff5879d1bcc8aa16bf11a64bc4b343619eccf69cebd4441412db8f931f0" exitCode=0 Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.237636 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerDied","Data":"0ab89ff5879d1bcc8aa16bf11a64bc4b343619eccf69cebd4441412db8f931f0"} Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.244775 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.459915801 podStartE2EDuration="3.244759489s" podCreationTimestamp="2026-01-27 20:16:55 +0000 UTC" firstStartedPulling="2026-01-27 20:16:56.257648509 +0000 UTC m=+5707.615502173" lastFinishedPulling="2026-01-27 20:16:57.042492207 +0000 UTC m=+5708.400345861" observedRunningTime="2026-01-27 20:16:58.239063189 +0000 UTC m=+5709.596916853" watchObservedRunningTime="2026-01-27 20:16:58.244759489 +0000 UTC m=+5709.602613153" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.281987 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.517513481 podStartE2EDuration="3.281966426s" podCreationTimestamp="2026-01-27 20:16:55 +0000 UTC" firstStartedPulling="2026-01-27 20:16:56.86443673 +0000 UTC m=+5708.222290384" 
lastFinishedPulling="2026-01-27 20:16:57.628889665 +0000 UTC m=+5708.986743329" observedRunningTime="2026-01-27 20:16:58.26753989 +0000 UTC m=+5709.625393554" watchObservedRunningTime="2026-01-27 20:16:58.281966426 +0000 UTC m=+5709.639820090" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.282220 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.368350 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375625 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpkm6\" (UniqueName: \"kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375684 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375732 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375777 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") 
" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375869 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.375904 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.376065 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs\") pod \"2d88039f-979e-4d21-b877-0b8d7fa35a18\" (UID: \"2d88039f-979e-4d21-b877-0b8d7fa35a18\") " Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.376362 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.376850 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs" (OuterVolumeSpecName: "logs") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.381349 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6" (OuterVolumeSpecName: "kube-api-access-kpkm6") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "kube-api-access-kpkm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.382411 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.385052 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts" (OuterVolumeSpecName: "scripts") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.424018 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.443890 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data" (OuterVolumeSpecName: "config-data") pod "2d88039f-979e-4d21-b877-0b8d7fa35a18" (UID: "2d88039f-979e-4d21-b877-0b8d7fa35a18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478226 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpkm6\" (UniqueName: \"kubernetes.io/projected/2d88039f-979e-4d21-b877-0b8d7fa35a18-kube-api-access-kpkm6\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478268 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478387 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478750 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478781 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d88039f-979e-4d21-b877-0b8d7fa35a18-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478804 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2d88039f-979e-4d21-b877-0b8d7fa35a18-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.478816 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d88039f-979e-4d21-b877-0b8d7fa35a18-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.832720 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.833770 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.834388 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.846938 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.847351 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.848868 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:16:58 crc kubenswrapper[4915]: I0127 20:16:58.849125 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.254519 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.263989 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2d88039f-979e-4d21-b877-0b8d7fa35a18","Type":"ContainerDied","Data":"7e743e7f0d0f96f910752793fd5f0c9847d44eadfab8a84a01d5c0150798e07f"} Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.264029 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.264046 4915 scope.go:117] "RemoveContainer" containerID="0ab89ff5879d1bcc8aa16bf11a64bc4b343619eccf69cebd4441412db8f931f0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.272660 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.278106 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.305699 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.333858 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.335288 4915 scope.go:117] "RemoveContainer" containerID="540825fb8f50b6b4dd4d43dbd949e6860bb97c12143cc0210a2491095215bffa" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.348844 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:59 crc kubenswrapper[4915]: E0127 20:16:59.349225 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.349238 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api" Jan 27 20:16:59 crc kubenswrapper[4915]: E0127 20:16:59.349270 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api-log" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.349300 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api-log" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.349476 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api-log" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.349492 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" containerName="cinder-api" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.350404 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.354760 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.402734 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d88039f-979e-4d21-b877-0b8d7fa35a18" path="/var/lib/kubelet/pods/2d88039f-979e-4d21-b877-0b8d7fa35a18/volumes" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.403568 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.407705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc 
kubenswrapper[4915]: I0127 20:16:59.407977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-scripts\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.408011 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.408081 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvpk\" (UniqueName: \"kubernetes.io/projected/e859ffbc-394d-4d46-b6bb-42a96f99d76d-kube-api-access-plvpk\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.408443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e859ffbc-394d-4d46-b6bb-42a96f99d76d-logs\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.408519 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0" Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.408554 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e859ffbc-394d-4d46-b6bb-42a96f99d76d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.510874 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e859ffbc-394d-4d46-b6bb-42a96f99d76d-logs\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.510927 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.510949 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e859ffbc-394d-4d46-b6bb-42a96f99d76d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.511020 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.511057 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-scripts\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.511075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.511097 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvpk\" (UniqueName: \"kubernetes.io/projected/e859ffbc-394d-4d46-b6bb-42a96f99d76d-kube-api-access-plvpk\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.511469 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e859ffbc-394d-4d46-b6bb-42a96f99d76d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.512651 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e859ffbc-394d-4d46-b6bb-42a96f99d76d-logs\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.521875 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.527400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.528659 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-scripts\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.533957 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvpk\" (UniqueName: \"kubernetes.io/projected/e859ffbc-394d-4d46-b6bb-42a96f99d76d-kube-api-access-plvpk\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.553031 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e859ffbc-394d-4d46-b6bb-42a96f99d76d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e859ffbc-394d-4d46-b6bb-42a96f99d76d\") " pod="openstack/cinder-api-0"
Jan 27 20:16:59 crc kubenswrapper[4915]: I0127 20:16:59.691499 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 20:17:00 crc kubenswrapper[4915]: I0127 20:17:00.065363 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 20:17:00 crc kubenswrapper[4915]: I0127 20:17:00.268855 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e859ffbc-394d-4d46-b6bb-42a96f99d76d","Type":"ContainerStarted","Data":"82e91dd9ff99d51e46112014537743efc934a16312c9fc6fa1f6cde36f7c45d0"}
Jan 27 20:17:00 crc kubenswrapper[4915]: I0127 20:17:00.667473 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Jan 27 20:17:01 crc kubenswrapper[4915]: I0127 20:17:01.283868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e859ffbc-394d-4d46-b6bb-42a96f99d76d","Type":"ContainerStarted","Data":"23664a86bc4896d5a7afe5e710514022ac54d28e1f9207354a681e99d8a06c37"}
Jan 27 20:17:01 crc kubenswrapper[4915]: I0127 20:17:01.289511 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 27 20:17:02 crc kubenswrapper[4915]: I0127 20:17:02.294291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e859ffbc-394d-4d46-b6bb-42a96f99d76d","Type":"ContainerStarted","Data":"fdb95c1d097cda2e9fcdf9bd6e46ea709155502042ccdeec32a52c860b7be529"}
Jan 27 20:17:02 crc kubenswrapper[4915]: I0127 20:17:02.294578 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 20:17:03 crc kubenswrapper[4915]: I0127 20:17:03.588682 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:03 crc kubenswrapper[4915]: I0127 20:17:03.610410 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.610390414 podStartE2EDuration="4.610390414s" podCreationTimestamp="2026-01-27 20:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:17:02.315721924 +0000 UTC m=+5713.673575588" watchObservedRunningTime="2026-01-27 20:17:03.610390414 +0000 UTC m=+5714.968244078"
Jan 27 20:17:03 crc kubenswrapper[4915]: I0127 20:17:03.654023 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:04 crc kubenswrapper[4915]: I0127 20:17:04.314895 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="cinder-scheduler" containerID="cri-o://53f16565e772682e7037d062649c0bb43911797952f39483e365ff260264db2c" gracePeriod=30
Jan 27 20:17:04 crc kubenswrapper[4915]: I0127 20:17:04.315002 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="probe" containerID="cri-o://7772f480b01e43576ba6a39f8cfc01c7b9c5b581ebb4348634e276fc7fa021a9" gracePeriod=30
Jan 27 20:17:05 crc kubenswrapper[4915]: I0127 20:17:05.324073 4915 generic.go:334] "Generic (PLEG): container finished" podID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerID="7772f480b01e43576ba6a39f8cfc01c7b9c5b581ebb4348634e276fc7fa021a9" exitCode=0
Jan 27 20:17:05 crc kubenswrapper[4915]: I0127 20:17:05.324433 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerDied","Data":"7772f480b01e43576ba6a39f8cfc01c7b9c5b581ebb4348634e276fc7fa021a9"}
Jan 27 20:17:05 crc kubenswrapper[4915]: I0127 20:17:05.917120 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.338426 4915 generic.go:334] "Generic (PLEG): container finished" podID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerID="53f16565e772682e7037d062649c0bb43911797952f39483e365ff260264db2c" exitCode=0
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.338499 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerDied","Data":"53f16565e772682e7037d062649c0bb43911797952f39483e365ff260264db2c"}
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.491812 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.619328 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645357 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd55l\" (UniqueName: \"kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645443 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645544 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.645769 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle\") pod \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\" (UID: \"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4\") "
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.646030 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.646358 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.656059 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts" (OuterVolumeSpecName: "scripts") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.656118 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l" (OuterVolumeSpecName: "kube-api-access-fd55l") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "kube-api-access-fd55l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.657588 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.719690 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.748311 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.748341 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.748356 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd55l\" (UniqueName: \"kubernetes.io/projected/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-kube-api-access-fd55l\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.748366 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.751122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data" (OuterVolumeSpecName: "config-data") pod "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" (UID: "7401dc91-1189-4f7f-bf5a-23dc4f84e7b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:17:06 crc kubenswrapper[4915]: I0127 20:17:06.849811 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.354009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7401dc91-1189-4f7f-bf5a-23dc4f84e7b4","Type":"ContainerDied","Data":"14cbdc17df4bca112ce2c30d01bdccd9fca554d4610f80e920c482cf403af80f"}
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.354069 4915 scope.go:117] "RemoveContainer" containerID="7772f480b01e43576ba6a39f8cfc01c7b9c5b581ebb4348634e276fc7fa021a9"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.354271 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.414480 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.418154 4915 scope.go:117] "RemoveContainer" containerID="53f16565e772682e7037d062649c0bb43911797952f39483e365ff260264db2c"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.422375 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.436380 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:07 crc kubenswrapper[4915]: E0127 20:17:07.436751 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="probe"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.436776 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="probe"
Jan 27 20:17:07 crc kubenswrapper[4915]: E0127 20:17:07.436817 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="cinder-scheduler"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.436825 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="cinder-scheduler"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.437012 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="probe"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.437050 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" containerName="cinder-scheduler"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.438136 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.444118 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.465553 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562116 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562306 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eac71222-8c11-4c6c-9399-0e4eb53487d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562418 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngcn\" (UniqueName: \"kubernetes.io/projected/eac71222-8c11-4c6c-9399-0e4eb53487d4-kube-api-access-hngcn\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562617 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562668 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.562830 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.664694 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.664774 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.664840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.664884 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eac71222-8c11-4c6c-9399-0e4eb53487d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.665139 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eac71222-8c11-4c6c-9399-0e4eb53487d4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.665364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngcn\" (UniqueName: \"kubernetes.io/projected/eac71222-8c11-4c6c-9399-0e4eb53487d4-kube-api-access-hngcn\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.665432 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.668395 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-scripts\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.668928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.669341 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.669368 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eac71222-8c11-4c6c-9399-0e4eb53487d4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.689524 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngcn\" (UniqueName: \"kubernetes.io/projected/eac71222-8c11-4c6c-9399-0e4eb53487d4-kube-api-access-hngcn\") pod \"cinder-scheduler-0\" (UID: \"eac71222-8c11-4c6c-9399-0e4eb53487d4\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:17:07 crc kubenswrapper[4915]: I0127 20:17:07.775848 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:08 crc kubenswrapper[4915]: I0127 20:17:08.042406 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:17:08 crc kubenswrapper[4915]: W0127 20:17:08.050707 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeac71222_8c11_4c6c_9399_0e4eb53487d4.slice/crio-98e22484272d3f5281113c22a71f7c914bebe6cdb585bb2cc1edcdb3940cc0cc WatchSource:0}: Error finding container 98e22484272d3f5281113c22a71f7c914bebe6cdb585bb2cc1edcdb3940cc0cc: Status 404 returned error can't find the container with id 98e22484272d3f5281113c22a71f7c914bebe6cdb585bb2cc1edcdb3940cc0cc
Jan 27 20:17:08 crc kubenswrapper[4915]: I0127 20:17:08.368028 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eac71222-8c11-4c6c-9399-0e4eb53487d4","Type":"ContainerStarted","Data":"98e22484272d3f5281113c22a71f7c914bebe6cdb585bb2cc1edcdb3940cc0cc"}
Jan 27 20:17:09 crc kubenswrapper[4915]: I0127 20:17:09.376508 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7401dc91-1189-4f7f-bf5a-23dc4f84e7b4" path="/var/lib/kubelet/pods/7401dc91-1189-4f7f-bf5a-23dc4f84e7b4/volumes"
Jan 27 20:17:09 crc kubenswrapper[4915]: I0127 20:17:09.379995 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eac71222-8c11-4c6c-9399-0e4eb53487d4","Type":"ContainerStarted","Data":"2d5660c7919759a89841ba6133cfd8046a09f912c50261622d9bc91c94842763"}
Jan 27 20:17:09 crc kubenswrapper[4915]: I0127 20:17:09.380031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eac71222-8c11-4c6c-9399-0e4eb53487d4","Type":"ContainerStarted","Data":"97bde5b015a4c828ddcf0521514bff670391e692955caa21edb0bdc9b6f564be"}
Jan 27 20:17:09 crc kubenswrapper[4915]: I0127 20:17:09.419190 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.419164496 podStartE2EDuration="2.419164496s" podCreationTimestamp="2026-01-27 20:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:17:09.409869537 +0000 UTC m=+5720.767723201" watchObservedRunningTime="2026-01-27 20:17:09.419164496 +0000 UTC m=+5720.777018150"
Jan 27 20:17:11 crc kubenswrapper[4915]: I0127 20:17:11.520313 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 27 20:17:12 crc kubenswrapper[4915]: I0127 20:17:12.776076 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:17 crc kubenswrapper[4915]: I0127 20:17:17.967464 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 20:17:20 crc kubenswrapper[4915]: I0127 20:17:20.625276 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:17:20 crc kubenswrapper[4915]: I0127 20:17:20.625620 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.624782 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.625598 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.625650 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.626478 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.626544 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" gracePeriod=600
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.789911 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" exitCode=0
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.789984 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"}
Jan 27 20:17:50 crc kubenswrapper[4915]: I0127 20:17:50.790401 4915 scope.go:117] "RemoveContainer" containerID="68967215d5f95104236d9c544ee1dac19345f8fc03262735622caa897c20b480"
Jan 27 20:17:50 crc kubenswrapper[4915]: E0127 20:17:50.833989 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:17:51 crc kubenswrapper[4915]: I0127 20:17:51.817575 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:17:51 crc kubenswrapper[4915]: E0127 20:17:51.818294 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:18:04 crc kubenswrapper[4915]: I0127 20:18:04.357648 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:18:04 crc kubenswrapper[4915]: E0127 20:18:04.358440 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:18:18 crc kubenswrapper[4915]: I0127 20:18:18.357897 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:18:18 crc kubenswrapper[4915]: E0127 20:18:18.358894 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:18:31 crc kubenswrapper[4915]: I0127 20:18:31.357445 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:18:31 crc kubenswrapper[4915]: E0127 20:18:31.358251 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:18:45 crc kubenswrapper[4915]: I0127 20:18:45.357973 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:18:45 crc kubenswrapper[4915]: E0127 20:18:45.359100 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:18:56 crc kubenswrapper[4915]: I0127 20:18:56.945200 4915 scope.go:117] "RemoveContainer" containerID="582f1257f3e5dc138b9a7df6b2186651ffc66b3de237aabdb4d328624776f332"
Jan 27 20:18:56 crc kubenswrapper[4915]: I0127 20:18:56.998005 4915 scope.go:117] "RemoveContainer" containerID="51d69d60c41b6681f468aeac488c104b40942aa3004a7f012561f3afdaaef61b"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.581010 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwr2k"]
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.582945 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.584755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-c2cqg"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.590171 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.592457 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mcbqz"]
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.595294 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mcbqz"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.601523 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k"]
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.616165 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mcbqz"]
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.761832 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/199b3c02-36d1-4807-8615-2ac7d42b3a3c-scripts\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.761932 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-run\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.761987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762031 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b1b872-d75b-4fb7-8995-a9183f0a8728-scripts\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762179 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762347 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-log-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762385 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-etc-ovs\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rgq\" (UniqueName: \"kubernetes.io/projected/31b1b872-d75b-4fb7-8995-a9183f0a8728-kube-api-access-56rgq\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762490 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj796\" (UniqueName: \"kubernetes.io/projected/199b3c02-36d1-4807-8615-2ac7d42b3a3c-kube-api-access-cj796\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz"
Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762667 4915
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-lib\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.762717 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-log\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865051 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b1b872-d75b-4fb7-8995-a9183f0a8728-scripts\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865142 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865211 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-log-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865245 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-etc-ovs\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56rgq\" (UniqueName: \"kubernetes.io/projected/31b1b872-d75b-4fb7-8995-a9183f0a8728-kube-api-access-56rgq\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865321 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj796\" (UniqueName: \"kubernetes.io/projected/199b3c02-36d1-4807-8615-2ac7d42b3a3c-kube-api-access-cj796\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-lib\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865427 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-log\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865471 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/199b3c02-36d1-4807-8615-2ac7d42b3a3c-scripts\") pod \"ovn-controller-ovs-mcbqz\" (UID: 
\"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-run\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865562 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865699 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-log\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865733 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865814 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-lib\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865819 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-run\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865893 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-etc-ovs\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.865892 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/199b3c02-36d1-4807-8615-2ac7d42b3a3c-var-run\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.867603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/199b3c02-36d1-4807-8615-2ac7d42b3a3c-scripts\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.867687 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31b1b872-d75b-4fb7-8995-a9183f0a8728-var-log-ovn\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.868450 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31b1b872-d75b-4fb7-8995-a9183f0a8728-scripts\") pod \"ovn-controller-mwr2k\" (UID: 
\"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.889696 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rgq\" (UniqueName: \"kubernetes.io/projected/31b1b872-d75b-4fb7-8995-a9183f0a8728-kube-api-access-56rgq\") pod \"ovn-controller-mwr2k\" (UID: \"31b1b872-d75b-4fb7-8995-a9183f0a8728\") " pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.898234 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj796\" (UniqueName: \"kubernetes.io/projected/199b3c02-36d1-4807-8615-2ac7d42b3a3c-kube-api-access-cj796\") pod \"ovn-controller-ovs-mcbqz\" (UID: \"199b3c02-36d1-4807-8615-2ac7d42b3a3c\") " pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.899153 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwr2k" Jan 27 20:18:59 crc kubenswrapper[4915]: I0127 20:18:59.917209 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:19:00 crc kubenswrapper[4915]: I0127 20:19:00.358732 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:19:00 crc kubenswrapper[4915]: E0127 20:19:00.359600 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:19:00 crc kubenswrapper[4915]: I0127 20:19:00.371671 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k"] Jan 27 20:19:00 crc kubenswrapper[4915]: I0127 20:19:00.453251 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k" event={"ID":"31b1b872-d75b-4fb7-8995-a9183f0a8728","Type":"ContainerStarted","Data":"1fbf3a290850a106046366dbc61ac4ddc786adb135f755829b284fb822bf4edd"} Jan 27 20:19:00 crc kubenswrapper[4915]: I0127 20:19:00.784645 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mcbqz"] Jan 27 20:19:00 crc kubenswrapper[4915]: W0127 20:19:00.785499 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod199b3c02_36d1_4807_8615_2ac7d42b3a3c.slice/crio-d9575409b415eefb758841dac25ae8940f96686d4e2103774769fae96a15257d WatchSource:0}: Error finding container d9575409b415eefb758841dac25ae8940f96686d4e2103774769fae96a15257d: Status 404 returned error can't find the container with id d9575409b415eefb758841dac25ae8940f96686d4e2103774769fae96a15257d Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.116021 4915 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ovn-controller-metrics-tjwzg"] Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.117870 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.120901 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.134163 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tjwzg"] Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.207587 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgx4p\" (UniqueName: \"kubernetes.io/projected/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-kube-api-access-xgx4p\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.207723 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovn-rundir\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.207755 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovs-rundir\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.207830 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-config\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309208 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-config\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309357 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgx4p\" (UniqueName: \"kubernetes.io/projected/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-kube-api-access-xgx4p\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovn-rundir\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovs-rundir\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309862 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovs-rundir\") pod 
\"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.309897 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-ovn-rundir\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.310485 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-config\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.332976 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgx4p\" (UniqueName: \"kubernetes.io/projected/5dc8ad96-6e14-4ab9-b6e5-34698d4600e9-kube-api-access-xgx4p\") pod \"ovn-controller-metrics-tjwzg\" (UID: \"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9\") " pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.356155 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-jlkcq"] Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.357911 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.388565 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-jlkcq"] Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.411868 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwm9\" (UniqueName: \"kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.412026 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.436337 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tjwzg" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.486759 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mcbqz" event={"ID":"199b3c02-36d1-4807-8615-2ac7d42b3a3c","Type":"ContainerStarted","Data":"f73b83038c37ece5cd1fe39bc630b2c65db29e3bb9c252c0a28563b34a115ed5"} Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.486828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mcbqz" event={"ID":"199b3c02-36d1-4807-8615-2ac7d42b3a3c","Type":"ContainerStarted","Data":"d9575409b415eefb758841dac25ae8940f96686d4e2103774769fae96a15257d"} Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.490471 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k" event={"ID":"31b1b872-d75b-4fb7-8995-a9183f0a8728","Type":"ContainerStarted","Data":"cc3719d73224b47c78366288df528d8f6ff493c842a853c4800979f2a513b5f7"} Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.491214 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mwr2k" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.514228 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwm9\" (UniqueName: \"kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.514368 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 
20:19:01.515812 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.537279 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwm9\" (UniqueName: \"kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9\") pod \"octavia-db-create-jlkcq\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.710239 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.975688 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwr2k" podStartSLOduration=2.975665683 podStartE2EDuration="2.975665683s" podCreationTimestamp="2026-01-27 20:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:01.529388467 +0000 UTC m=+5832.887242141" watchObservedRunningTime="2026-01-27 20:19:01.975665683 +0000 UTC m=+5833.333519347" Jan 27 20:19:01 crc kubenswrapper[4915]: I0127 20:19:01.979327 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tjwzg"] Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.248890 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-jlkcq"] Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.501209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jlkcq" 
event={"ID":"12e3005f-a1f7-4107-99da-dce54d08fb8f","Type":"ContainerStarted","Data":"5f2e914382dc72d2cdefcfb329826a5243207431ddb241c1be323d682549ce3b"} Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.501758 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jlkcq" event={"ID":"12e3005f-a1f7-4107-99da-dce54d08fb8f","Type":"ContainerStarted","Data":"b78d849cf6272a85309f959639a1273ed513c3ff52d14fde6df604e3f9d6658e"} Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.503385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tjwzg" event={"ID":"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9","Type":"ContainerStarted","Data":"6c11be25003470269948e50a4f532ceb472fc46bf312d331cd6da45572602484"} Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.503430 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tjwzg" event={"ID":"5dc8ad96-6e14-4ab9-b6e5-34698d4600e9","Type":"ContainerStarted","Data":"18a2f494cbba87cb5ca64a138dfb2f099231dc02ece8c8d56f341b7b4ffbad30"} Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.506255 4915 generic.go:334] "Generic (PLEG): container finished" podID="199b3c02-36d1-4807-8615-2ac7d42b3a3c" containerID="f73b83038c37ece5cd1fe39bc630b2c65db29e3bb9c252c0a28563b34a115ed5" exitCode=0 Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.506375 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mcbqz" event={"ID":"199b3c02-36d1-4807-8615-2ac7d42b3a3c","Type":"ContainerDied","Data":"f73b83038c37ece5cd1fe39bc630b2c65db29e3bb9c252c0a28563b34a115ed5"} Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.524036 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-jlkcq" podStartSLOduration=1.524010115 podStartE2EDuration="1.524010115s" podCreationTimestamp="2026-01-27 20:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:02.516818388 +0000 UTC m=+5833.874672082" watchObservedRunningTime="2026-01-27 20:19:02.524010115 +0000 UTC m=+5833.881863779" Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.586467 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tjwzg" podStartSLOduration=1.586445247 podStartE2EDuration="1.586445247s" podCreationTimestamp="2026-01-27 20:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:02.586019867 +0000 UTC m=+5833.943873531" watchObservedRunningTime="2026-01-27 20:19:02.586445247 +0000 UTC m=+5833.944298911" Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.883885 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-6c17-account-create-update-b7x64"] Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.885124 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.896035 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.906105 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-6c17-account-create-update-b7x64"] Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.953237 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:02 crc kubenswrapper[4915]: I0127 20:19:02.953323 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wcb\" (UniqueName: \"kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.038176 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wn645"] Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.055165 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.055249 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z4wcb\" (UniqueName: \"kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.056096 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wn645"] Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.056270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.074181 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wcb\" (UniqueName: \"kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb\") pod \"octavia-6c17-account-create-update-b7x64\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.204252 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.376731 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7feb35cc-b62a-40af-9109-63180f240627" path="/var/lib/kubelet/pods/7feb35cc-b62a-40af-9109-63180f240627/volumes" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.520568 4915 generic.go:334] "Generic (PLEG): container finished" podID="12e3005f-a1f7-4107-99da-dce54d08fb8f" containerID="5f2e914382dc72d2cdefcfb329826a5243207431ddb241c1be323d682549ce3b" exitCode=0 Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.520633 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jlkcq" event={"ID":"12e3005f-a1f7-4107-99da-dce54d08fb8f","Type":"ContainerDied","Data":"5f2e914382dc72d2cdefcfb329826a5243207431ddb241c1be323d682549ce3b"} Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.524707 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mcbqz" event={"ID":"199b3c02-36d1-4807-8615-2ac7d42b3a3c","Type":"ContainerStarted","Data":"147cf0e05f0a5fbc493f9ca3d2ccf74907e8858b202a6fcb5ca9e86cbedd0567"} Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.524877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mcbqz" event={"ID":"199b3c02-36d1-4807-8615-2ac7d42b3a3c","Type":"ContainerStarted","Data":"7f9d7126ce67c4257dbe2ff02920ca635fdcfcd3d4730f4151f11fd6de266b8b"} Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.524982 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.527177 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.572562 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-mcbqz" podStartSLOduration=4.572540655 podStartE2EDuration="4.572540655s" podCreationTimestamp="2026-01-27 20:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:03.560636863 +0000 UTC m=+5834.918490527" watchObservedRunningTime="2026-01-27 20:19:03.572540655 +0000 UTC m=+5834.930394319" Jan 27 20:19:03 crc kubenswrapper[4915]: I0127 20:19:03.735290 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-6c17-account-create-update-b7x64"] Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.034957 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5200-account-create-update-xkkjg"] Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.045295 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5200-account-create-update-xkkjg"] Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.552732 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-6c17-account-create-update-b7x64" event={"ID":"3b4669e3-bdc5-4c02-bec9-2d354c3a7333","Type":"ContainerStarted","Data":"cf4ac298c9cfa9200337278db4f3025a0fd0911a6bc7c68a14ec8239186ece61"} Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.553934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-6c17-account-create-update-b7x64" event={"ID":"3b4669e3-bdc5-4c02-bec9-2d354c3a7333","Type":"ContainerStarted","Data":"2147b29b813a5143732f73592b25a8959653e3513e374e22dbff69ad54d0773b"} Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.578338 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-6c17-account-create-update-b7x64" podStartSLOduration=2.5782663059999997 podStartE2EDuration="2.578266306s" podCreationTimestamp="2026-01-27 20:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:04.571289534 +0000 UTC m=+5835.929143218" watchObservedRunningTime="2026-01-27 20:19:04.578266306 +0000 UTC m=+5835.936119990" Jan 27 20:19:04 crc kubenswrapper[4915]: I0127 20:19:04.935233 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.004172 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bwm9\" (UniqueName: \"kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9\") pod \"12e3005f-a1f7-4107-99da-dce54d08fb8f\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.004522 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts\") pod \"12e3005f-a1f7-4107-99da-dce54d08fb8f\" (UID: \"12e3005f-a1f7-4107-99da-dce54d08fb8f\") " Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.005142 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12e3005f-a1f7-4107-99da-dce54d08fb8f" (UID: "12e3005f-a1f7-4107-99da-dce54d08fb8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.011718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9" (OuterVolumeSpecName: "kube-api-access-4bwm9") pod "12e3005f-a1f7-4107-99da-dce54d08fb8f" (UID: "12e3005f-a1f7-4107-99da-dce54d08fb8f"). InnerVolumeSpecName "kube-api-access-4bwm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.106851 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12e3005f-a1f7-4107-99da-dce54d08fb8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.106883 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bwm9\" (UniqueName: \"kubernetes.io/projected/12e3005f-a1f7-4107-99da-dce54d08fb8f-kube-api-access-4bwm9\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.371570 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0774762f-3eb3-4068-ba74-aeed37d4474e" path="/var/lib/kubelet/pods/0774762f-3eb3-4068-ba74-aeed37d4474e/volumes" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.567108 4915 generic.go:334] "Generic (PLEG): container finished" podID="3b4669e3-bdc5-4c02-bec9-2d354c3a7333" containerID="cf4ac298c9cfa9200337278db4f3025a0fd0911a6bc7c68a14ec8239186ece61" exitCode=0 Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.567208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-6c17-account-create-update-b7x64" event={"ID":"3b4669e3-bdc5-4c02-bec9-2d354c3a7333","Type":"ContainerDied","Data":"cf4ac298c9cfa9200337278db4f3025a0fd0911a6bc7c68a14ec8239186ece61"} Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.569427 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-jlkcq" Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.569421 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jlkcq" event={"ID":"12e3005f-a1f7-4107-99da-dce54d08fb8f","Type":"ContainerDied","Data":"b78d849cf6272a85309f959639a1273ed513c3ff52d14fde6df604e3f9d6658e"} Jan 27 20:19:05 crc kubenswrapper[4915]: I0127 20:19:05.569485 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78d849cf6272a85309f959639a1273ed513c3ff52d14fde6df604e3f9d6658e" Jan 27 20:19:06 crc kubenswrapper[4915]: I0127 20:19:06.944965 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.043723 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4wcb\" (UniqueName: \"kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb\") pod \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.044011 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts\") pod \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\" (UID: \"3b4669e3-bdc5-4c02-bec9-2d354c3a7333\") " Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.044438 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4669e3-bdc5-4c02-bec9-2d354c3a7333" (UID: "3b4669e3-bdc5-4c02-bec9-2d354c3a7333"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.045732 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.049182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb" (OuterVolumeSpecName: "kube-api-access-z4wcb") pod "3b4669e3-bdc5-4c02-bec9-2d354c3a7333" (UID: "3b4669e3-bdc5-4c02-bec9-2d354c3a7333"). InnerVolumeSpecName "kube-api-access-z4wcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.147843 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4wcb\" (UniqueName: \"kubernetes.io/projected/3b4669e3-bdc5-4c02-bec9-2d354c3a7333-kube-api-access-z4wcb\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.592877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-6c17-account-create-update-b7x64" event={"ID":"3b4669e3-bdc5-4c02-bec9-2d354c3a7333","Type":"ContainerDied","Data":"2147b29b813a5143732f73592b25a8959653e3513e374e22dbff69ad54d0773b"} Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.593213 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2147b29b813a5143732f73592b25a8959653e3513e374e22dbff69ad54d0773b" Jan 27 20:19:07 crc kubenswrapper[4915]: I0127 20:19:07.592959 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-6c17-account-create-update-b7x64" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.321801 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-rwn2l"] Jan 27 20:19:09 crc kubenswrapper[4915]: E0127 20:19:09.322336 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4669e3-bdc5-4c02-bec9-2d354c3a7333" containerName="mariadb-account-create-update" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.322353 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4669e3-bdc5-4c02-bec9-2d354c3a7333" containerName="mariadb-account-create-update" Jan 27 20:19:09 crc kubenswrapper[4915]: E0127 20:19:09.322373 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e3005f-a1f7-4107-99da-dce54d08fb8f" containerName="mariadb-database-create" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.322382 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e3005f-a1f7-4107-99da-dce54d08fb8f" containerName="mariadb-database-create" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.322598 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e3005f-a1f7-4107-99da-dce54d08fb8f" containerName="mariadb-database-create" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.322617 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4669e3-bdc5-4c02-bec9-2d354c3a7333" containerName="mariadb-account-create-update" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.323360 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.330604 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-rwn2l"] Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.389306 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf468\" (UniqueName: \"kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.389601 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.492902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.493013 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf468\" (UniqueName: \"kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.493680 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.512196 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf468\" (UniqueName: \"kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468\") pod \"octavia-persistence-db-create-rwn2l\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:09 crc kubenswrapper[4915]: I0127 20:19:09.649491 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.047766 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-flbdx"] Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.057855 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-flbdx"] Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.080225 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-rwn2l"] Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.268352 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-e734-account-create-update-wqtdl"] Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.269850 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.271833 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.279832 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e734-account-create-update-wqtdl"] Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.307655 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc27z\" (UniqueName: \"kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.307736 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.410094 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc27z\" (UniqueName: \"kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.410176 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.411023 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.429252 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc27z\" (UniqueName: \"kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z\") pod \"octavia-e734-account-create-update-wqtdl\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.588288 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.621022 4915 generic.go:334] "Generic (PLEG): container finished" podID="2019a39c-dbff-4cdc-abed-9b74dda1f9c8" containerID="1b7395f9b71ad2bcbcea1c90e477e62fbc86e4ab9cd07a4ea01fe3f43b35791e" exitCode=0 Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.621075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rwn2l" event={"ID":"2019a39c-dbff-4cdc-abed-9b74dda1f9c8","Type":"ContainerDied","Data":"1b7395f9b71ad2bcbcea1c90e477e62fbc86e4ab9cd07a4ea01fe3f43b35791e"} Jan 27 20:19:10 crc kubenswrapper[4915]: I0127 20:19:10.621105 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rwn2l" event={"ID":"2019a39c-dbff-4cdc-abed-9b74dda1f9c8","Type":"ContainerStarted","Data":"29632cf9bbc50910112bcbc1a8dbdb08605730a59e033e11ed6f3c654f863fa9"} Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.021759 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e734-account-create-update-wqtdl"] Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.378828 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c47c5d-eb47-40e0-9fb6-cf364ff82535" path="/var/lib/kubelet/pods/68c47c5d-eb47-40e0-9fb6-cf364ff82535/volumes" Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.630003 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e734-account-create-update-wqtdl" event={"ID":"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc","Type":"ContainerStarted","Data":"b66b54aa04890aff15548bccfa38bbad809750f3abd02ed7421266f34d335397"} Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.630070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e734-account-create-update-wqtdl" 
event={"ID":"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc","Type":"ContainerStarted","Data":"de524248f348fa837ff1524685fffea0f1c24b8c2d2693d3b750d7905b90d7fb"} Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.658390 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-e734-account-create-update-wqtdl" podStartSLOduration=1.658364979 podStartE2EDuration="1.658364979s" podCreationTimestamp="2026-01-27 20:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:11.649591884 +0000 UTC m=+5843.007445548" watchObservedRunningTime="2026-01-27 20:19:11.658364979 +0000 UTC m=+5843.016218643" Jan 27 20:19:11 crc kubenswrapper[4915]: I0127 20:19:11.999195 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.049567 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts\") pod \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.049652 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf468\" (UniqueName: \"kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468\") pod \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\" (UID: \"2019a39c-dbff-4cdc-abed-9b74dda1f9c8\") " Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.050110 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2019a39c-dbff-4cdc-abed-9b74dda1f9c8" (UID: 
"2019a39c-dbff-4cdc-abed-9b74dda1f9c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.050307 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.054903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468" (OuterVolumeSpecName: "kube-api-access-vf468") pod "2019a39c-dbff-4cdc-abed-9b74dda1f9c8" (UID: "2019a39c-dbff-4cdc-abed-9b74dda1f9c8"). InnerVolumeSpecName "kube-api-access-vf468". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.152083 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf468\" (UniqueName: \"kubernetes.io/projected/2019a39c-dbff-4cdc-abed-9b74dda1f9c8-kube-api-access-vf468\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.640914 4915 generic.go:334] "Generic (PLEG): container finished" podID="eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" containerID="b66b54aa04890aff15548bccfa38bbad809750f3abd02ed7421266f34d335397" exitCode=0 Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.640968 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e734-account-create-update-wqtdl" event={"ID":"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc","Type":"ContainerDied","Data":"b66b54aa04890aff15548bccfa38bbad809750f3abd02ed7421266f34d335397"} Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.643395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rwn2l" 
event={"ID":"2019a39c-dbff-4cdc-abed-9b74dda1f9c8","Type":"ContainerDied","Data":"29632cf9bbc50910112bcbc1a8dbdb08605730a59e033e11ed6f3c654f863fa9"} Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.643428 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29632cf9bbc50910112bcbc1a8dbdb08605730a59e033e11ed6f3c654f863fa9" Jan 27 20:19:12 crc kubenswrapper[4915]: I0127 20:19:12.643463 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-rwn2l" Jan 27 20:19:13 crc kubenswrapper[4915]: I0127 20:19:13.358312 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:19:13 crc kubenswrapper[4915]: E0127 20:19:13.358548 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.048529 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.090499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts\") pod \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.090661 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc27z\" (UniqueName: \"kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z\") pod \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\" (UID: \"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc\") " Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.091182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" (UID: "eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.091758 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.099330 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z" (OuterVolumeSpecName: "kube-api-access-wc27z") pod "eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" (UID: "eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc"). InnerVolumeSpecName "kube-api-access-wc27z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.193103 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc27z\" (UniqueName: \"kubernetes.io/projected/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc-kube-api-access-wc27z\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.664768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e734-account-create-update-wqtdl" event={"ID":"eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc","Type":"ContainerDied","Data":"de524248f348fa837ff1524685fffea0f1c24b8c2d2693d3b750d7905b90d7fb"} Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.665036 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de524248f348fa837ff1524685fffea0f1c24b8c2d2693d3b750d7905b90d7fb" Jan 27 20:19:14 crc kubenswrapper[4915]: I0127 20:19:14.664833 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e734-account-create-update-wqtdl" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.467102 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-5db958498-dmvb4"] Jan 27 20:19:16 crc kubenswrapper[4915]: E0127 20:19:16.468036 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2019a39c-dbff-4cdc-abed-9b74dda1f9c8" containerName="mariadb-database-create" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.468050 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2019a39c-dbff-4cdc-abed-9b74dda1f9c8" containerName="mariadb-database-create" Jan 27 20:19:16 crc kubenswrapper[4915]: E0127 20:19:16.468067 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" containerName="mariadb-account-create-update" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.468073 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" containerName="mariadb-account-create-update" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.468260 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2019a39c-dbff-4cdc-abed-9b74dda1f9c8" containerName="mariadb-database-create" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.468272 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" containerName="mariadb-account-create-update" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.469837 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.473229 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-szvxz" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.473459 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.474044 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.486767 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5db958498-dmvb4"] Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.538718 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-octavia-run\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.538833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-combined-ca-bundle\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.538914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-scripts\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.539033 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data-merged\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.539070 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.640812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data-merged\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.640885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.640944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-octavia-run\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.641010 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-combined-ca-bundle\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.641093 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-scripts\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.641521 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data-merged\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.641521 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5c98922c-696e-4b7e-a12a-16b8f840c516-octavia-run\") pod 
\"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.648615 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-combined-ca-bundle\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.648754 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-scripts\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.649152 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c98922c-696e-4b7e-a12a-16b8f840c516-config-data\") pod \"octavia-api-5db958498-dmvb4\" (UID: \"5c98922c-696e-4b7e-a12a-16b8f840c516\") " pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:16 crc kubenswrapper[4915]: I0127 20:19:16.796059 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:17 crc kubenswrapper[4915]: I0127 20:19:17.314687 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5db958498-dmvb4"] Jan 27 20:19:17 crc kubenswrapper[4915]: W0127 20:19:17.319430 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c98922c_696e_4b7e_a12a_16b8f840c516.slice/crio-befc55f5671882ffef9ad7b7e48e1043ee0b5058af85a8571c5a957ce5689ddd WatchSource:0}: Error finding container befc55f5671882ffef9ad7b7e48e1043ee0b5058af85a8571c5a957ce5689ddd: Status 404 returned error can't find the container with id befc55f5671882ffef9ad7b7e48e1043ee0b5058af85a8571c5a957ce5689ddd Jan 27 20:19:17 crc kubenswrapper[4915]: I0127 20:19:17.322463 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:19:17 crc kubenswrapper[4915]: I0127 20:19:17.689572 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5db958498-dmvb4" event={"ID":"5c98922c-696e-4b7e-a12a-16b8f840c516","Type":"ContainerStarted","Data":"befc55f5671882ffef9ad7b7e48e1043ee0b5058af85a8571c5a957ce5689ddd"} Jan 27 20:19:25 crc kubenswrapper[4915]: I0127 20:19:25.037220 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bs7n9"] Jan 27 20:19:25 crc kubenswrapper[4915]: I0127 20:19:25.048925 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bs7n9"] Jan 27 20:19:25 crc kubenswrapper[4915]: I0127 20:19:25.369423 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec" path="/var/lib/kubelet/pods/e5ad98ae-2bcc-4caf-8c6f-4ce6e2d0e0ec/volumes" Jan 27 20:19:26 crc kubenswrapper[4915]: I0127 20:19:26.357550 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" 
Jan 27 20:19:26 crc kubenswrapper[4915]: E0127 20:19:26.358160 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:19:26 crc kubenswrapper[4915]: I0127 20:19:26.780199 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c98922c-696e-4b7e-a12a-16b8f840c516" containerID="7b9417dba23f41413ceedee7773bd88cdc9c02c73b27366b5bb7e91eb533519d" exitCode=0 Jan 27 20:19:26 crc kubenswrapper[4915]: I0127 20:19:26.780246 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5db958498-dmvb4" event={"ID":"5c98922c-696e-4b7e-a12a-16b8f840c516","Type":"ContainerDied","Data":"7b9417dba23f41413ceedee7773bd88cdc9c02c73b27366b5bb7e91eb533519d"} Jan 27 20:19:27 crc kubenswrapper[4915]: I0127 20:19:27.791363 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5db958498-dmvb4" event={"ID":"5c98922c-696e-4b7e-a12a-16b8f840c516","Type":"ContainerStarted","Data":"cc6d3eb9db5ef914c5dc5325c36dfaf23ae12bfe3e9c7cfe398c161b31d8588b"} Jan 27 20:19:27 crc kubenswrapper[4915]: I0127 20:19:27.791685 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:27 crc kubenswrapper[4915]: I0127 20:19:27.791699 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:27 crc kubenswrapper[4915]: I0127 20:19:27.791707 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5db958498-dmvb4" 
event={"ID":"5c98922c-696e-4b7e-a12a-16b8f840c516","Type":"ContainerStarted","Data":"7e4c04f9d4475fefb3a8ab3fa868f3ee1ed2cc7fae2f38653c73083bf7052860"} Jan 27 20:19:27 crc kubenswrapper[4915]: I0127 20:19:27.812832 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-5db958498-dmvb4" podStartSLOduration=3.354433255 podStartE2EDuration="11.812785745s" podCreationTimestamp="2026-01-27 20:19:16 +0000 UTC" firstStartedPulling="2026-01-27 20:19:17.322157633 +0000 UTC m=+5848.680011297" lastFinishedPulling="2026-01-27 20:19:25.780510083 +0000 UTC m=+5857.138363787" observedRunningTime="2026-01-27 20:19:27.81173241 +0000 UTC m=+5859.169586084" watchObservedRunningTime="2026-01-27 20:19:27.812785745 +0000 UTC m=+5859.170639409" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.494868 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-mbmqb"] Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.497251 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.499263 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.499529 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.499662 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.512059 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-mbmqb"] Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.567511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/dc1d771a-7c3b-48fc-9779-5ee95669692b-hm-ports\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.567620 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-scripts\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.567737 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data-merged\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.567964 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.670545 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.670981 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/dc1d771a-7c3b-48fc-9779-5ee95669692b-hm-ports\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.671211 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-scripts\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.671283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data-merged\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.671915 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/dc1d771a-7c3b-48fc-9779-5ee95669692b-hm-ports\") 
pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.671978 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data-merged\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.676617 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-config-data\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.677122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc1d771a-7c3b-48fc-9779-5ee95669692b-scripts\") pod \"octavia-rsyslog-mbmqb\" (UID: \"dc1d771a-7c3b-48fc-9779-5ee95669692b\") " pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:32 crc kubenswrapper[4915]: I0127 20:19:32.824621 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.126603 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.129160 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.133952 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.146208 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.182353 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.183782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.288074 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.288552 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: 
\"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.289184 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.296978 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config\") pod \"octavia-image-upload-59f8cff499-l4k55\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.394946 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-mbmqb"] Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.468345 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.646233 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-mbmqb"] Jan 27 20:19:33 crc kubenswrapper[4915]: I0127 20:19:33.860039 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-mbmqb" event={"ID":"dc1d771a-7c3b-48fc-9779-5ee95669692b","Type":"ContainerStarted","Data":"1a553acf186bbfc1a9f0e784e700d4815c675d69b4f5a46f622ec23988c4f48b"} Jan 27 20:19:34 crc kubenswrapper[4915]: I0127 20:19:34.110451 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:19:34 crc kubenswrapper[4915]: I0127 20:19:34.871411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerStarted","Data":"9ea8c3aee7aba4e5d5474ebef58ba5a0c95a26b3ec029af1bf8defe79f656e47"} Jan 27 20:19:34 crc kubenswrapper[4915]: I0127 20:19:34.956368 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mwr2k" podUID="31b1b872-d75b-4fb7-8995-a9183f0a8728" containerName="ovn-controller" probeResult="failure" output=< Jan 27 20:19:34 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 20:19:34 crc kubenswrapper[4915]: > Jan 27 20:19:34 crc kubenswrapper[4915]: I0127 20:19:34.969695 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:19:34 crc kubenswrapper[4915]: I0127 20:19:34.969742 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mcbqz" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.069019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-mwr2k-config-cmvq7"] Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.070407 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.073173 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.094315 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k-config-cmvq7"] Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168252 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55gf\" (UniqueName: \"kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168304 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168358 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168414 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.168488 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d55gf\" (UniqueName: \"kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270084 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270134 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270190 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270266 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270460 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.270567 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.271034 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.272330 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.297333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55gf\" (UniqueName: \"kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf\") pod \"ovn-controller-mwr2k-config-cmvq7\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.396972 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.883651 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-mbmqb" event={"ID":"dc1d771a-7c3b-48fc-9779-5ee95669692b","Type":"ContainerStarted","Data":"e8047a751f5f9929bbc6af9b2f4d67505e4d51feb3b082066889d6191eb99cfd"} Jan 27 20:19:35 crc kubenswrapper[4915]: I0127 20:19:35.885606 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k-config-cmvq7"] Jan 27 20:19:37 crc kubenswrapper[4915]: W0127 20:19:37.264192 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c0618b0_e233_4ae9_8f9f_ff6292a53c04.slice/crio-4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1 WatchSource:0}: Error finding container 4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1: Status 404 returned error can't find the container with id 4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1 Jan 27 20:19:37 crc kubenswrapper[4915]: I0127 20:19:37.901840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-cmvq7" event={"ID":"9c0618b0-e233-4ae9-8f9f-ff6292a53c04","Type":"ContainerStarted","Data":"4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1"} Jan 27 20:19:38 crc kubenswrapper[4915]: I0127 20:19:38.357460 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:19:38 crc kubenswrapper[4915]: E0127 20:19:38.357731 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:19:38 crc kubenswrapper[4915]: I0127 20:19:38.911884 4915 generic.go:334] "Generic (PLEG): container finished" podID="dc1d771a-7c3b-48fc-9779-5ee95669692b" containerID="e8047a751f5f9929bbc6af9b2f4d67505e4d51feb3b082066889d6191eb99cfd" exitCode=0 Jan 27 20:19:38 crc kubenswrapper[4915]: I0127 20:19:38.911974 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-mbmqb" event={"ID":"dc1d771a-7c3b-48fc-9779-5ee95669692b","Type":"ContainerDied","Data":"e8047a751f5f9929bbc6af9b2f4d67505e4d51feb3b082066889d6191eb99cfd"} Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.293496 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-gvkrl"] Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.296037 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.306195 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.306199 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-gvkrl"] Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.358720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.359147 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts\") pod 
\"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.359175 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.359274 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.461258 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.461444 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.461507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " 
pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.461531 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.461885 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.470525 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.476590 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.481207 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle\") pod \"octavia-db-sync-gvkrl\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.631200 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:39 crc kubenswrapper[4915]: I0127 20:19:39.945499 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mwr2k" podUID="31b1b872-d75b-4fb7-8995-a9183f0a8728" containerName="ovn-controller" probeResult="failure" output=< Jan 27 20:19:39 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 20:19:39 crc kubenswrapper[4915]: > Jan 27 20:19:44 crc kubenswrapper[4915]: I0127 20:19:44.823621 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-gvkrl"] Jan 27 20:19:44 crc kubenswrapper[4915]: W0127 20:19:44.899076 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd34b342_7ecd_4767_ae22_bcc59df27294.slice/crio-246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd WatchSource:0}: Error finding container 246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd: Status 404 returned error can't find the container with id 246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd Jan 27 20:19:44 crc kubenswrapper[4915]: I0127 20:19:44.943913 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mwr2k" Jan 27 20:19:44 crc kubenswrapper[4915]: I0127 20:19:44.994581 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerStarted","Data":"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5"} Jan 27 20:19:45 crc kubenswrapper[4915]: I0127 20:19:45.009552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-cmvq7" 
event={"ID":"9c0618b0-e233-4ae9-8f9f-ff6292a53c04","Type":"ContainerStarted","Data":"e96ea98ec725452c407773b6130ea4282ab8a6570f3c448c8e95ab1c531b1ca8"} Jan 27 20:19:45 crc kubenswrapper[4915]: I0127 20:19:45.015599 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvkrl" event={"ID":"fd34b342-7ecd-4767-ae22-bcc59df27294","Type":"ContainerStarted","Data":"246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd"} Jan 27 20:19:45 crc kubenswrapper[4915]: I0127 20:19:45.066965 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwr2k-config-cmvq7" podStartSLOduration=10.066945559 podStartE2EDuration="10.066945559s" podCreationTimestamp="2026-01-27 20:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:45.038508191 +0000 UTC m=+5876.396361855" watchObservedRunningTime="2026-01-27 20:19:45.066945559 +0000 UTC m=+5876.424799223" Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.027679 4915 generic.go:334] "Generic (PLEG): container finished" podID="0757156c-25f5-4574-856c-c7973a6ab716" containerID="bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5" exitCode=0 Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.028124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerDied","Data":"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5"} Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.030242 4915 generic.go:334] "Generic (PLEG): container finished" podID="9c0618b0-e233-4ae9-8f9f-ff6292a53c04" containerID="e96ea98ec725452c407773b6130ea4282ab8a6570f3c448c8e95ab1c531b1ca8" exitCode=0 Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.030346 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-mwr2k-config-cmvq7" event={"ID":"9c0618b0-e233-4ae9-8f9f-ff6292a53c04","Type":"ContainerDied","Data":"e96ea98ec725452c407773b6130ea4282ab8a6570f3c448c8e95ab1c531b1ca8"} Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.033944 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerID="0364aa449493f950bdbcc73b039529bf0150f68454f17554f47e92e40d75796a" exitCode=0 Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.034021 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvkrl" event={"ID":"fd34b342-7ecd-4767-ae22-bcc59df27294","Type":"ContainerDied","Data":"0364aa449493f950bdbcc73b039529bf0150f68454f17554f47e92e40d75796a"} Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.038685 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-mbmqb" event={"ID":"dc1d771a-7c3b-48fc-9779-5ee95669692b","Type":"ContainerStarted","Data":"8048113b256bb3a6fac514ad74322b817a3852a637abedbabedae8a1a233c471"} Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.038909 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:19:46 crc kubenswrapper[4915]: I0127 20:19:46.124749 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-mbmqb" podStartSLOduration=2.546152056 podStartE2EDuration="14.124729477s" podCreationTimestamp="2026-01-27 20:19:32 +0000 UTC" firstStartedPulling="2026-01-27 20:19:33.405138395 +0000 UTC m=+5864.762992059" lastFinishedPulling="2026-01-27 20:19:44.983715816 +0000 UTC m=+5876.341569480" observedRunningTime="2026-01-27 20:19:46.123482806 +0000 UTC m=+5877.481336490" watchObservedRunningTime="2026-01-27 20:19:46.124729477 +0000 UTC m=+5877.482583131" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.052625 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-db-sync-gvkrl" event={"ID":"fd34b342-7ecd-4767-ae22-bcc59df27294","Type":"ContainerStarted","Data":"97e9d564cc1d583b44c1ba1c9d614d693c0beb0b58ded612e553ca8257957774"} Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.082853 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-gvkrl" podStartSLOduration=8.082832878 podStartE2EDuration="8.082832878s" podCreationTimestamp="2026-01-27 20:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:19:47.069993613 +0000 UTC m=+5878.427847277" watchObservedRunningTime="2026-01-27 20:19:47.082832878 +0000 UTC m=+5878.440686532" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.480892 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.537409 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.537474 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.538516 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: 
"9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.538621 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run" (OuterVolumeSpecName: "var-run") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: "9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.539295 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts" (OuterVolumeSpecName: "scripts") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: "9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.539355 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.539468 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.539533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d55gf\" (UniqueName: \"kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: 
\"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.539652 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn\") pod \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\" (UID: \"9c0618b0-e233-4ae9-8f9f-ff6292a53c04\") " Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540166 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: "9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540213 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: "9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540355 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540373 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540388 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540401 4915 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.540413 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.568682 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf" (OuterVolumeSpecName: "kube-api-access-d55gf") pod "9c0618b0-e233-4ae9-8f9f-ff6292a53c04" (UID: "9c0618b0-e233-4ae9-8f9f-ff6292a53c04"). InnerVolumeSpecName "kube-api-access-d55gf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:47 crc kubenswrapper[4915]: I0127 20:19:47.643233 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d55gf\" (UniqueName: \"kubernetes.io/projected/9c0618b0-e233-4ae9-8f9f-ff6292a53c04-kube-api-access-d55gf\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.075637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerStarted","Data":"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4"} Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.078667 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-cmvq7" event={"ID":"9c0618b0-e233-4ae9-8f9f-ff6292a53c04","Type":"ContainerDied","Data":"4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1"} Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.078901 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cdc1cfb57b8ed79c5a3581f869f4c7ca9eb8b8c2532b740d957adcd63d751b1" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.078695 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-cmvq7" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.138897 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-l4k55" podStartSLOduration=1.52142201 podStartE2EDuration="15.138869864s" podCreationTimestamp="2026-01-27 20:19:33 +0000 UTC" firstStartedPulling="2026-01-27 20:19:34.116163971 +0000 UTC m=+5865.474017635" lastFinishedPulling="2026-01-27 20:19:47.733611825 +0000 UTC m=+5879.091465489" observedRunningTime="2026-01-27 20:19:48.104184192 +0000 UTC m=+5879.462037856" watchObservedRunningTime="2026-01-27 20:19:48.138869864 +0000 UTC m=+5879.496723528" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.175019 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwr2k-config-cmvq7"] Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.185518 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwr2k-config-cmvq7"] Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.297581 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwr2k-config-6rmn6"] Jan 27 20:19:48 crc kubenswrapper[4915]: E0127 20:19:48.298450 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0618b0-e233-4ae9-8f9f-ff6292a53c04" containerName="ovn-config" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.298482 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0618b0-e233-4ae9-8f9f-ff6292a53c04" containerName="ovn-config" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.298906 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0618b0-e233-4ae9-8f9f-ff6292a53c04" containerName="ovn-config" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.300215 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.306166 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.320374 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k-config-6rmn6"] Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.377151 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.391210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.391284 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.391529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: 
\"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.391614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.391705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9m6n\" (UniqueName: \"kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.493961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.494079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.494162 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9m6n\" (UniqueName: \"kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n\") pod 
\"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.494228 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.494277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.494320 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.495204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.497356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: 
\"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.497425 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.498161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.500173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.526420 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9m6n\" (UniqueName: \"kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n\") pod \"ovn-controller-mwr2k-config-6rmn6\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:48 crc kubenswrapper[4915]: I0127 20:19:48.644762 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:49 crc kubenswrapper[4915]: I0127 20:19:49.204502 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwr2k-config-6rmn6"] Jan 27 20:19:49 crc kubenswrapper[4915]: I0127 20:19:49.381186 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0618b0-e233-4ae9-8f9f-ff6292a53c04" path="/var/lib/kubelet/pods/9c0618b0-e233-4ae9-8f9f-ff6292a53c04/volumes" Jan 27 20:19:50 crc kubenswrapper[4915]: I0127 20:19:50.106496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-6rmn6" event={"ID":"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b","Type":"ContainerStarted","Data":"877d891a17dd9f0e4ad7c4b9ad67a33b437503ad19206744a372b28b0103f6bf"} Jan 27 20:19:51 crc kubenswrapper[4915]: I0127 20:19:51.111268 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:51 crc kubenswrapper[4915]: I0127 20:19:51.123249 4915 generic.go:334] "Generic (PLEG): container finished" podID="a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" containerID="50d2b040595422025d81955e328a55534245fc5f052189d1b50f723556d66d47" exitCode=0 Jan 27 20:19:51 crc kubenswrapper[4915]: I0127 20:19:51.123294 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-6rmn6" event={"ID":"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b","Type":"ContainerDied","Data":"50d2b040595422025d81955e328a55534245fc5f052189d1b50f723556d66d47"} Jan 27 20:19:51 crc kubenswrapper[4915]: I0127 20:19:51.166625 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5db958498-dmvb4" Jan 27 20:19:51 crc kubenswrapper[4915]: I0127 20:19:51.359700 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:19:51 crc kubenswrapper[4915]: E0127 20:19:51.360338 4915 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.132654 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerID="97e9d564cc1d583b44c1ba1c9d614d693c0beb0b58ded612e553ca8257957774" exitCode=0 Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.132815 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvkrl" event={"ID":"fd34b342-7ecd-4767-ae22-bcc59df27294","Type":"ContainerDied","Data":"97e9d564cc1d583b44c1ba1c9d614d693c0beb0b58ded612e553ca8257957774"} Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.524702 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687256 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687689 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687833 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687918 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687954 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9m6n\" (UniqueName: \"kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.687980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.688032 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.688076 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run\") pod \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\" (UID: \"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b\") " Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.688604 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.688630 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.688662 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run" (OuterVolumeSpecName: "var-run") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.689107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.690218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts" (OuterVolumeSpecName: "scripts") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.697527 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n" (OuterVolumeSpecName: "kube-api-access-k9m6n") pod "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" (UID: "a234b7bb-a32d-4a0e-836c-0c9354c6cb9b"). InnerVolumeSpecName "kube-api-access-k9m6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.791723 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9m6n\" (UniqueName: \"kubernetes.io/projected/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-kube-api-access-k9m6n\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.791838 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.791858 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:52 crc kubenswrapper[4915]: I0127 20:19:52.791871 4915 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.146974 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwr2k-config-6rmn6" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.163299 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwr2k-config-6rmn6" event={"ID":"a234b7bb-a32d-4a0e-836c-0c9354c6cb9b","Type":"ContainerDied","Data":"877d891a17dd9f0e4ad7c4b9ad67a33b437503ad19206744a372b28b0103f6bf"} Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.163460 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877d891a17dd9f0e4ad7c4b9ad67a33b437503ad19206744a372b28b0103f6bf" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.603641 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.607699 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwr2k-config-6rmn6"] Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.621575 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwr2k-config-6rmn6"] Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.713219 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged\") pod \"fd34b342-7ecd-4767-ae22-bcc59df27294\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.713406 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data\") pod \"fd34b342-7ecd-4767-ae22-bcc59df27294\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.713614 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts\") pod \"fd34b342-7ecd-4767-ae22-bcc59df27294\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.713676 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle\") pod \"fd34b342-7ecd-4767-ae22-bcc59df27294\" (UID: \"fd34b342-7ecd-4767-ae22-bcc59df27294\") " Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.719771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data" (OuterVolumeSpecName: "config-data") pod "fd34b342-7ecd-4767-ae22-bcc59df27294" (UID: "fd34b342-7ecd-4767-ae22-bcc59df27294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.720012 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts" (OuterVolumeSpecName: "scripts") pod "fd34b342-7ecd-4767-ae22-bcc59df27294" (UID: "fd34b342-7ecd-4767-ae22-bcc59df27294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.738553 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "fd34b342-7ecd-4767-ae22-bcc59df27294" (UID: "fd34b342-7ecd-4767-ae22-bcc59df27294"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.743701 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd34b342-7ecd-4767-ae22-bcc59df27294" (UID: "fd34b342-7ecd-4767-ae22-bcc59df27294"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.818245 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.818284 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.818297 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd34b342-7ecd-4767-ae22-bcc59df27294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:53 crc kubenswrapper[4915]: I0127 20:19:53.818310 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fd34b342-7ecd-4767-ae22-bcc59df27294-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 27 20:19:54 crc kubenswrapper[4915]: I0127 20:19:54.166528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-gvkrl" event={"ID":"fd34b342-7ecd-4767-ae22-bcc59df27294","Type":"ContainerDied","Data":"246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd"} Jan 27 20:19:54 crc kubenswrapper[4915]: I0127 20:19:54.166570 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246dba141e74406b9b4ca8e5ef41243058baba7b2e2bb7e3da0284654b28b0bd" Jan 27 20:19:54 crc kubenswrapper[4915]: I0127 20:19:54.166616 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-gvkrl" Jan 27 20:19:55 crc kubenswrapper[4915]: I0127 20:19:55.370174 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" path="/var/lib/kubelet/pods/a234b7bb-a32d-4a0e-836c-0c9354c6cb9b/volumes" Jan 27 20:19:57 crc kubenswrapper[4915]: I0127 20:19:57.061593 4915 scope.go:117] "RemoveContainer" containerID="1eac1d864ecbe95a7219bd1585dd9aad27314599f16be69218f0cd2589607878" Jan 27 20:19:57 crc kubenswrapper[4915]: I0127 20:19:57.101374 4915 scope.go:117] "RemoveContainer" containerID="9deabbead6040a871c97f757bd55286ddf89c449ef92688af48b8877574b328a" Jan 27 20:19:57 crc kubenswrapper[4915]: I0127 20:19:57.154784 4915 scope.go:117] "RemoveContainer" containerID="1f71c75cb432c24c2fa48dc464d58a7b71ae37a1f95615b832795542499c11e1" Jan 27 20:19:57 crc kubenswrapper[4915]: I0127 20:19:57.183098 4915 scope.go:117] "RemoveContainer" containerID="731c17d61e9351362f4c4180e150274fdaced8e7abbce4e463d0bf5107a3376f" Jan 27 20:20:02 crc kubenswrapper[4915]: I0127 20:20:02.864484 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-mbmqb" Jan 27 20:20:05 crc kubenswrapper[4915]: I0127 20:20:05.358184 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:20:05 crc kubenswrapper[4915]: E0127 20:20:05.359008 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.293259 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xplwc"] Jan 27 20:20:10 crc kubenswrapper[4915]: E0127 20:20:10.294342 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerName="octavia-db-sync" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.294358 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerName="octavia-db-sync" Jan 27 20:20:10 crc kubenswrapper[4915]: E0127 20:20:10.294388 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" containerName="ovn-config" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.294397 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" containerName="ovn-config" Jan 27 20:20:10 crc kubenswrapper[4915]: E0127 20:20:10.294424 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerName="init" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.294435 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerName="init" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.294670 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a234b7bb-a32d-4a0e-836c-0c9354c6cb9b" containerName="ovn-config" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.294686 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" containerName="octavia-db-sync" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.296496 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.309311 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xplwc"] Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.489082 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.489156 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x8b\" (UniqueName: \"kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.489209 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.590681 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.590733 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-j7x8b\" (UniqueName: \"kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.590767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.591219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.591269 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.611203 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x8b\" (UniqueName: \"kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b\") pod \"redhat-marketplace-xplwc\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") " pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:10 crc kubenswrapper[4915]: I0127 20:20:10.615013 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xplwc" Jan 27 20:20:11 crc kubenswrapper[4915]: I0127 20:20:11.092406 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xplwc"] Jan 27 20:20:11 crc kubenswrapper[4915]: W0127 20:20:11.097045 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod030d232f_7a93_461e_8005_00d01fc39c6d.slice/crio-f7356ad0aaa76481341b48c37e87c811b380b351485155a710d1ef05ab5ce6ac WatchSource:0}: Error finding container f7356ad0aaa76481341b48c37e87c811b380b351485155a710d1ef05ab5ce6ac: Status 404 returned error can't find the container with id f7356ad0aaa76481341b48c37e87c811b380b351485155a710d1ef05ab5ce6ac Jan 27 20:20:11 crc kubenswrapper[4915]: I0127 20:20:11.352341 4915 generic.go:334] "Generic (PLEG): container finished" podID="030d232f-7a93-461e-8005-00d01fc39c6d" containerID="33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7" exitCode=0 Jan 27 20:20:11 crc kubenswrapper[4915]: I0127 20:20:11.352418 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerDied","Data":"33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7"} Jan 27 20:20:11 crc kubenswrapper[4915]: I0127 20:20:11.352659 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerStarted","Data":"f7356ad0aaa76481341b48c37e87c811b380b351485155a710d1ef05ab5ce6ac"} Jan 27 20:20:13 crc kubenswrapper[4915]: I0127 20:20:13.370364 4915 generic.go:334] "Generic (PLEG): container finished" podID="030d232f-7a93-461e-8005-00d01fc39c6d" containerID="0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2" exitCode=0 Jan 27 20:20:13 crc kubenswrapper[4915]: I0127 
20:20:13.370473 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerDied","Data":"0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2"} Jan 27 20:20:14 crc kubenswrapper[4915]: I0127 20:20:14.383845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerStarted","Data":"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"} Jan 27 20:20:14 crc kubenswrapper[4915]: I0127 20:20:14.406400 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xplwc" podStartSLOduration=1.9742242129999998 podStartE2EDuration="4.406376822s" podCreationTimestamp="2026-01-27 20:20:10 +0000 UTC" firstStartedPulling="2026-01-27 20:20:11.354349645 +0000 UTC m=+5902.712203309" lastFinishedPulling="2026-01-27 20:20:13.786502254 +0000 UTC m=+5905.144355918" observedRunningTime="2026-01-27 20:20:14.403494111 +0000 UTC m=+5905.761347775" watchObservedRunningTime="2026-01-27 20:20:14.406376822 +0000 UTC m=+5905.764230486" Jan 27 20:20:16 crc kubenswrapper[4915]: I0127 20:20:16.357434 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:20:16 crc kubenswrapper[4915]: E0127 20:20:16.358018 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:20:16 crc kubenswrapper[4915]: I0127 20:20:16.690103 4915 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:20:16 crc kubenswrapper[4915]: I0127 20:20:16.690328 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-l4k55" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="octavia-amphora-httpd" containerID="cri-o://de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4" gracePeriod=30 Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.153590 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.213635 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config\") pod \"0757156c-25f5-4574-856c-c7973a6ab716\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.213817 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image\") pod \"0757156c-25f5-4574-856c-c7973a6ab716\" (UID: \"0757156c-25f5-4574-856c-c7973a6ab716\") " Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.251327 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0757156c-25f5-4574-856c-c7973a6ab716" (UID: "0757156c-25f5-4574-856c-c7973a6ab716"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.294990 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "0757156c-25f5-4574-856c-c7973a6ab716" (UID: "0757156c-25f5-4574-856c-c7973a6ab716"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.316105 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0757156c-25f5-4574-856c-c7973a6ab716-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.316137 4915 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/0757156c-25f5-4574-856c-c7973a6ab716-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.422364 4915 generic.go:334] "Generic (PLEG): container finished" podID="0757156c-25f5-4574-856c-c7973a6ab716" containerID="de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4" exitCode=0 Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.422405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerDied","Data":"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4"} Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.422432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-l4k55" event={"ID":"0757156c-25f5-4574-856c-c7973a6ab716","Type":"ContainerDied","Data":"9ea8c3aee7aba4e5d5474ebef58ba5a0c95a26b3ec029af1bf8defe79f656e47"} Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.422448 4915 scope.go:117] 
"RemoveContainer" containerID="de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.422451 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-l4k55" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.453079 4915 scope.go:117] "RemoveContainer" containerID="bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.463155 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.471002 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-l4k55"] Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.477038 4915 scope.go:117] "RemoveContainer" containerID="de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4" Jan 27 20:20:17 crc kubenswrapper[4915]: E0127 20:20:17.477487 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4\": container with ID starting with de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4 not found: ID does not exist" containerID="de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.477524 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4"} err="failed to get container status \"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4\": rpc error: code = NotFound desc = could not find container \"de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4\": container with ID starting with 
de231e7d7eaa1a7f78e072936b5c217100284a3eecba555801fe9119710d6ad4 not found: ID does not exist" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.477549 4915 scope.go:117] "RemoveContainer" containerID="bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5" Jan 27 20:20:17 crc kubenswrapper[4915]: E0127 20:20:17.478056 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5\": container with ID starting with bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5 not found: ID does not exist" containerID="bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5" Jan 27 20:20:17 crc kubenswrapper[4915]: I0127 20:20:17.478102 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5"} err="failed to get container status \"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5\": rpc error: code = NotFound desc = could not find container \"bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5\": container with ID starting with bf13eb3ea4415a35416ecad8c79e9b991614839b997c40869df22035494884b5 not found: ID does not exist" Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.370137 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0757156c-25f5-4574-856c-c7973a6ab716" path="/var/lib/kubelet/pods/0757156c-25f5-4574-856c-c7973a6ab716/volumes" Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.421702 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-wpc9q"] Jan 27 20:20:19 crc kubenswrapper[4915]: E0127 20:20:19.422136 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="octavia-amphora-httpd" Jan 27 20:20:19 crc 
kubenswrapper[4915]: I0127 20:20:19.422152 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="octavia-amphora-httpd"
Jan 27 20:20:19 crc kubenswrapper[4915]: E0127 20:20:19.422167 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="init"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.422173 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="init"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.422335 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0757156c-25f5-4574-856c-c7973a6ab716" containerName="octavia-amphora-httpd"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.423267 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.429219 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.432394 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-wpc9q"]
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.558712 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3daa5b09-3c8c-45d8-820c-e5ec689d5776-amphora-image\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.559017 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3daa5b09-3c8c-45d8-820c-e5ec689d5776-httpd-config\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.660905 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3daa5b09-3c8c-45d8-820c-e5ec689d5776-amphora-image\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.660998 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3daa5b09-3c8c-45d8-820c-e5ec689d5776-httpd-config\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.661322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3daa5b09-3c8c-45d8-820c-e5ec689d5776-amphora-image\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.670042 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3daa5b09-3c8c-45d8-820c-e5ec689d5776-httpd-config\") pod \"octavia-image-upload-59f8cff499-wpc9q\" (UID: \"3daa5b09-3c8c-45d8-820c-e5ec689d5776\") " pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:19 crc kubenswrapper[4915]: I0127 20:20:19.751706 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-wpc9q"
Jan 27 20:20:20 crc kubenswrapper[4915]: I0127 20:20:20.213163 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-wpc9q"]
Jan 27 20:20:20 crc kubenswrapper[4915]: I0127 20:20:20.462334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-wpc9q" event={"ID":"3daa5b09-3c8c-45d8-820c-e5ec689d5776","Type":"ContainerStarted","Data":"c504ffd2d2e762c6dd2d3bf6cb9b32ca6069bff9f9c4630013cca35da78affe4"}
Jan 27 20:20:20 crc kubenswrapper[4915]: I0127 20:20:20.615337 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:20 crc kubenswrapper[4915]: I0127 20:20:20.615397 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:20 crc kubenswrapper[4915]: I0127 20:20:20.667189 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:21 crc kubenswrapper[4915]: I0127 20:20:21.470728 4915 generic.go:334] "Generic (PLEG): container finished" podID="3daa5b09-3c8c-45d8-820c-e5ec689d5776" containerID="cfa75ec585cb275f3b2241f627c6ee5674d1d310fdce3b51563addd9107e0f49" exitCode=0
Jan 27 20:20:21 crc kubenswrapper[4915]: I0127 20:20:21.470839 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-wpc9q" event={"ID":"3daa5b09-3c8c-45d8-820c-e5ec689d5776","Type":"ContainerDied","Data":"cfa75ec585cb275f3b2241f627c6ee5674d1d310fdce3b51563addd9107e0f49"}
Jan 27 20:20:21 crc kubenswrapper[4915]: I0127 20:20:21.516341 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:21 crc kubenswrapper[4915]: I0127 20:20:21.574688 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xplwc"]
Jan 27 20:20:23 crc kubenswrapper[4915]: I0127 20:20:23.494910 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xplwc" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="registry-server" containerID="cri-o://bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a" gracePeriod=2
Jan 27 20:20:23 crc kubenswrapper[4915]: I0127 20:20:23.495482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-wpc9q" event={"ID":"3daa5b09-3c8c-45d8-820c-e5ec689d5776","Type":"ContainerStarted","Data":"7423bc0d0395914fd34184f793b37bf06ec6f1171693511159d534ff9823e60b"}
Jan 27 20:20:23 crc kubenswrapper[4915]: I0127 20:20:23.529449 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-wpc9q" podStartSLOduration=2.308373344 podStartE2EDuration="4.529427949s" podCreationTimestamp="2026-01-27 20:20:19 +0000 UTC" firstStartedPulling="2026-01-27 20:20:20.216595731 +0000 UTC m=+5911.574449395" lastFinishedPulling="2026-01-27 20:20:22.437650336 +0000 UTC m=+5913.795504000" observedRunningTime="2026-01-27 20:20:23.52007087 +0000 UTC m=+5914.877924534" watchObservedRunningTime="2026-01-27 20:20:23.529427949 +0000 UTC m=+5914.887281613"
Jan 27 20:20:23 crc kubenswrapper[4915]: I0127 20:20:23.953586 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.149165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7x8b\" (UniqueName: \"kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b\") pod \"030d232f-7a93-461e-8005-00d01fc39c6d\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") "
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.149247 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities\") pod \"030d232f-7a93-461e-8005-00d01fc39c6d\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") "
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.149278 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content\") pod \"030d232f-7a93-461e-8005-00d01fc39c6d\" (UID: \"030d232f-7a93-461e-8005-00d01fc39c6d\") "
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.150293 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities" (OuterVolumeSpecName: "utilities") pod "030d232f-7a93-461e-8005-00d01fc39c6d" (UID: "030d232f-7a93-461e-8005-00d01fc39c6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.156607 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b" (OuterVolumeSpecName: "kube-api-access-j7x8b") pod "030d232f-7a93-461e-8005-00d01fc39c6d" (UID: "030d232f-7a93-461e-8005-00d01fc39c6d"). InnerVolumeSpecName "kube-api-access-j7x8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.176274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "030d232f-7a93-461e-8005-00d01fc39c6d" (UID: "030d232f-7a93-461e-8005-00d01fc39c6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.252629 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7x8b\" (UniqueName: \"kubernetes.io/projected/030d232f-7a93-461e-8005-00d01fc39c6d-kube-api-access-j7x8b\") on node \"crc\" DevicePath \"\""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.252671 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.252685 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/030d232f-7a93-461e-8005-00d01fc39c6d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.522538 4915 generic.go:334] "Generic (PLEG): container finished" podID="030d232f-7a93-461e-8005-00d01fc39c6d" containerID="bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a" exitCode=0
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.522637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerDied","Data":"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"}
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.522691 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xplwc" event={"ID":"030d232f-7a93-461e-8005-00d01fc39c6d","Type":"ContainerDied","Data":"f7356ad0aaa76481341b48c37e87c811b380b351485155a710d1ef05ab5ce6ac"}
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.522722 4915 scope.go:117] "RemoveContainer" containerID="bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.524123 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xplwc"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.554880 4915 scope.go:117] "RemoveContainer" containerID="0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.561251 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xplwc"]
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.570233 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xplwc"]
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.587987 4915 scope.go:117] "RemoveContainer" containerID="33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.621573 4915 scope.go:117] "RemoveContainer" containerID="bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"
Jan 27 20:20:24 crc kubenswrapper[4915]: E0127 20:20:24.622478 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a\": container with ID starting with bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a not found: ID does not exist" containerID="bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.622537 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a"} err="failed to get container status \"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a\": rpc error: code = NotFound desc = could not find container \"bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a\": container with ID starting with bc8ab2597fb11d7c80606071405de0b26af26a728b4cf66356154edf7a27bf3a not found: ID does not exist"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.622572 4915 scope.go:117] "RemoveContainer" containerID="0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2"
Jan 27 20:20:24 crc kubenswrapper[4915]: E0127 20:20:24.623111 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2\": container with ID starting with 0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2 not found: ID does not exist" containerID="0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.623151 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2"} err="failed to get container status \"0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2\": rpc error: code = NotFound desc = could not find container \"0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2\": container with ID starting with 0c128f54d928160c8eb1e4674330ef00b32e7bd54c5c4da77310a6e9be0a89f2 not found: ID does not exist"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.623183 4915 scope.go:117] "RemoveContainer" containerID="33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7"
Jan 27 20:20:24 crc kubenswrapper[4915]: E0127 20:20:24.623723 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7\": container with ID starting with 33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7 not found: ID does not exist" containerID="33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7"
Jan 27 20:20:24 crc kubenswrapper[4915]: I0127 20:20:24.623751 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7"} err="failed to get container status \"33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7\": rpc error: code = NotFound desc = could not find container \"33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7\": container with ID starting with 33f9a3586b5412159c3a16ba31ac82b54605001710d1e845e1134df926f535a7 not found: ID does not exist"
Jan 27 20:20:25 crc kubenswrapper[4915]: I0127 20:20:25.370093 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" path="/var/lib/kubelet/pods/030d232f-7a93-461e-8005-00d01fc39c6d/volumes"
Jan 27 20:20:30 crc kubenswrapper[4915]: I0127 20:20:30.359812 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:20:30 crc kubenswrapper[4915]: E0127 20:20:30.360922 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.529132 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-lkpg8"]
Jan 27 20:20:38 crc kubenswrapper[4915]: E0127 20:20:38.531255 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="extract-utilities"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.531272 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="extract-utilities"
Jan 27 20:20:38 crc kubenswrapper[4915]: E0127 20:20:38.531306 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="extract-content"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.531314 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="extract-content"
Jan 27 20:20:38 crc kubenswrapper[4915]: E0127 20:20:38.531325 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="registry-server"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.531333 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="registry-server"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.531538 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="030d232f-7a93-461e-8005-00d01fc39c6d" containerName="registry-server"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.532737 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.538194 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.538263 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.538430 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.548917 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-lkpg8"]
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.631116 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.631180 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-combined-ca-bundle\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.631342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-scripts\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.631541 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data-merged\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.631739 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c14c777-a5d8-495b-b85a-43f045df5aa7-hm-ports\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.632190 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-amphora-certs\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.736699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.736849 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-combined-ca-bundle\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.736906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-scripts\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.736999 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data-merged\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.737067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c14c777-a5d8-495b-b85a-43f045df5aa7-hm-ports\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.737156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-amphora-certs\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.745553 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-amphora-certs\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.750294 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.753301 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c14c777-a5d8-495b-b85a-43f045df5aa7-config-data-merged\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.754001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c14c777-a5d8-495b-b85a-43f045df5aa7-hm-ports\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.754928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-combined-ca-bundle\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.761647 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c14c777-a5d8-495b-b85a-43f045df5aa7-scripts\") pod \"octavia-healthmanager-lkpg8\" (UID: \"2c14c777-a5d8-495b-b85a-43f045df5aa7\") " pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:38 crc kubenswrapper[4915]: I0127 20:20:38.853117 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-lkpg8"
Jan 27 20:20:39 crc kubenswrapper[4915]: I0127 20:20:39.550158 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-lkpg8"]
Jan 27 20:20:39 crc kubenswrapper[4915]: W0127 20:20:39.553478 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c14c777_a5d8_495b_b85a_43f045df5aa7.slice/crio-b9c9f9ba1c8a7e7f5498865e050cfc1dfc0170e44fc314548bc33f1e1123c39d WatchSource:0}: Error finding container b9c9f9ba1c8a7e7f5498865e050cfc1dfc0170e44fc314548bc33f1e1123c39d: Status 404 returned error can't find the container with id b9c9f9ba1c8a7e7f5498865e050cfc1dfc0170e44fc314548bc33f1e1123c39d
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.055900 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-f8s9t"]
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.058100 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.060379 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.060581 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.083197 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f8s9t"]
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.164883 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-hm-ports\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.165215 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-combined-ca-bundle\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.165350 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-amphora-certs\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.165493 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.165554 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-scripts\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.165638 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data-merged\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.266866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-hm-ports\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.266931 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-combined-ca-bundle\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.266965 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-amphora-certs\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.266985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.267016 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-scripts\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.267075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data-merged\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.267386 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data-merged\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.267842 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-hm-ports\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.273347 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-amphora-certs\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.275124 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-combined-ca-bundle\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.275168 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-config-data\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.276833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53ac311c-9520-4cc5-a3a8-3ae1f822ff23-scripts\") pod \"octavia-housekeeping-f8s9t\" (UID: \"53ac311c-9520-4cc5-a3a8-3ae1f822ff23\") " pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.376227 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-f8s9t"
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.486127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-lkpg8" event={"ID":"2c14c777-a5d8-495b-b85a-43f045df5aa7","Type":"ContainerStarted","Data":"5da07db6ac7ba519947ac11fdf680b2153997cbb74533ff60fd6f6939f368778"}
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.486174 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-lkpg8" event={"ID":"2c14c777-a5d8-495b-b85a-43f045df5aa7","Type":"ContainerStarted","Data":"b9c9f9ba1c8a7e7f5498865e050cfc1dfc0170e44fc314548bc33f1e1123c39d"}
Jan 27 20:20:40 crc kubenswrapper[4915]: I0127 20:20:40.888799 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-f8s9t"]
Jan 27 20:20:40 crc kubenswrapper[4915]: W0127 20:20:40.894002 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ac311c_9520_4cc5_a3a8_3ae1f822ff23.slice/crio-717502b464adcd2a4876870d38a951d5effed4cb443f16f6f22b8cb1279884d8 WatchSource:0}: Error finding container 717502b464adcd2a4876870d38a951d5effed4cb443f16f6f22b8cb1279884d8: Status 404 returned error can't find the container with id 717502b464adcd2a4876870d38a951d5effed4cb443f16f6f22b8cb1279884d8
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.133469 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-ngf58"]
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.135613 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.138215 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.150755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.151323 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-ngf58"]
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.286541 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-scripts\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.286822 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-combined-ca-bundle\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.286974 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/25e3000f-52ca-4d59-8f94-3f6ba166b355-hm-ports\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.287101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-amphora-certs\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.287272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data-merged\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.287403 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.388859 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-combined-ca-bundle\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.388935 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/25e3000f-52ca-4d59-8f94-3f6ba166b355-hm-ports\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58"
Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.388975 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-amphora-certs\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " 
pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.389023 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data-merged\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.389057 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.389115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-scripts\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.391586 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data-merged\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.391930 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/25e3000f-52ca-4d59-8f94-3f6ba166b355-hm-ports\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.394590 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"amphora-certs\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-amphora-certs\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.394772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-config-data\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.396183 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-combined-ca-bundle\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.409127 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e3000f-52ca-4d59-8f94-3f6ba166b355-scripts\") pod \"octavia-worker-ngf58\" (UID: \"25e3000f-52ca-4d59-8f94-3f6ba166b355\") " pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.460054 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-ngf58" Jan 27 20:20:41 crc kubenswrapper[4915]: I0127 20:20:41.518846 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f8s9t" event={"ID":"53ac311c-9520-4cc5-a3a8-3ae1f822ff23","Type":"ContainerStarted","Data":"717502b464adcd2a4876870d38a951d5effed4cb443f16f6f22b8cb1279884d8"} Jan 27 20:20:42 crc kubenswrapper[4915]: I0127 20:20:42.026281 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-ngf58"] Jan 27 20:20:42 crc kubenswrapper[4915]: W0127 20:20:42.262114 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e3000f_52ca_4d59_8f94_3f6ba166b355.slice/crio-1fe6a11cfe31a596369800d287d9951775173424bcdef00989962e15c5f11b62 WatchSource:0}: Error finding container 1fe6a11cfe31a596369800d287d9951775173424bcdef00989962e15c5f11b62: Status 404 returned error can't find the container with id 1fe6a11cfe31a596369800d287d9951775173424bcdef00989962e15c5f11b62 Jan 27 20:20:42 crc kubenswrapper[4915]: I0127 20:20:42.524438 4915 generic.go:334] "Generic (PLEG): container finished" podID="2c14c777-a5d8-495b-b85a-43f045df5aa7" containerID="5da07db6ac7ba519947ac11fdf680b2153997cbb74533ff60fd6f6939f368778" exitCode=0 Jan 27 20:20:42 crc kubenswrapper[4915]: I0127 20:20:42.524525 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-lkpg8" event={"ID":"2c14c777-a5d8-495b-b85a-43f045df5aa7","Type":"ContainerDied","Data":"5da07db6ac7ba519947ac11fdf680b2153997cbb74533ff60fd6f6939f368778"} Jan 27 20:20:42 crc kubenswrapper[4915]: I0127 20:20:42.526195 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-ngf58" event={"ID":"25e3000f-52ca-4d59-8f94-3f6ba166b355","Type":"ContainerStarted","Data":"1fe6a11cfe31a596369800d287d9951775173424bcdef00989962e15c5f11b62"} Jan 27 20:20:42 crc kubenswrapper[4915]: I0127 
20:20:42.794775 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-lkpg8"] Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.358099 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:20:43 crc kubenswrapper[4915]: E0127 20:20:43.358739 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.536256 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-lkpg8" event={"ID":"2c14c777-a5d8-495b-b85a-43f045df5aa7","Type":"ContainerStarted","Data":"5b1dc8145cf6f37e5a481707f47c80d9a34e97a0d3b7c12d04591fdf807cde14"} Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.536459 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-lkpg8" Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.538595 4915 generic.go:334] "Generic (PLEG): container finished" podID="53ac311c-9520-4cc5-a3a8-3ae1f822ff23" containerID="25508f3d810be9871a4b8a278b967d086181503a5e32ae76e23b50342e06bfe6" exitCode=0 Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.538637 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f8s9t" event={"ID":"53ac311c-9520-4cc5-a3a8-3ae1f822ff23","Type":"ContainerDied","Data":"25508f3d810be9871a4b8a278b967d086181503a5e32ae76e23b50342e06bfe6"} Jan 27 20:20:43 crc kubenswrapper[4915]: I0127 20:20:43.566283 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/octavia-healthmanager-lkpg8" podStartSLOduration=5.566264337 podStartE2EDuration="5.566264337s" podCreationTimestamp="2026-01-27 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:20:43.55987313 +0000 UTC m=+5934.917726814" watchObservedRunningTime="2026-01-27 20:20:43.566264337 +0000 UTC m=+5934.924118001" Jan 27 20:20:44 crc kubenswrapper[4915]: I0127 20:20:44.552498 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-ngf58" event={"ID":"25e3000f-52ca-4d59-8f94-3f6ba166b355","Type":"ContainerStarted","Data":"a9a2c4f62d0d343688b8c6cb2367196a8f68ea9ea406e6d65c55c0b1cdf99e6b"} Jan 27 20:20:44 crc kubenswrapper[4915]: I0127 20:20:44.557646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-f8s9t" event={"ID":"53ac311c-9520-4cc5-a3a8-3ae1f822ff23","Type":"ContainerStarted","Data":"3703059ac264ff7994e914bf39db7f4528cd9f490bae3bcade8aec2903880c09"} Jan 27 20:20:44 crc kubenswrapper[4915]: I0127 20:20:44.557691 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-f8s9t" Jan 27 20:20:44 crc kubenswrapper[4915]: I0127 20:20:44.596610 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-f8s9t" podStartSLOduration=3.190037901 podStartE2EDuration="4.596588641s" podCreationTimestamp="2026-01-27 20:20:40 +0000 UTC" firstStartedPulling="2026-01-27 20:20:40.89668665 +0000 UTC m=+5932.254540314" lastFinishedPulling="2026-01-27 20:20:42.30323738 +0000 UTC m=+5933.661091054" observedRunningTime="2026-01-27 20:20:44.592400868 +0000 UTC m=+5935.950254542" watchObservedRunningTime="2026-01-27 20:20:44.596588641 +0000 UTC m=+5935.954442315" Jan 27 20:20:45 crc kubenswrapper[4915]: I0127 20:20:45.570088 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="25e3000f-52ca-4d59-8f94-3f6ba166b355" containerID="a9a2c4f62d0d343688b8c6cb2367196a8f68ea9ea406e6d65c55c0b1cdf99e6b" exitCode=0 Jan 27 20:20:45 crc kubenswrapper[4915]: I0127 20:20:45.570148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-ngf58" event={"ID":"25e3000f-52ca-4d59-8f94-3f6ba166b355","Type":"ContainerDied","Data":"a9a2c4f62d0d343688b8c6cb2367196a8f68ea9ea406e6d65c55c0b1cdf99e6b"} Jan 27 20:20:46 crc kubenswrapper[4915]: I0127 20:20:46.585975 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-ngf58" event={"ID":"25e3000f-52ca-4d59-8f94-3f6ba166b355","Type":"ContainerStarted","Data":"22aeac20279e5a9e9c87481793d16cd60f11a80218b170bcf1655b73b15f5821"} Jan 27 20:20:46 crc kubenswrapper[4915]: I0127 20:20:46.586428 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-ngf58" Jan 27 20:20:46 crc kubenswrapper[4915]: I0127 20:20:46.615299 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-ngf58" podStartSLOduration=4.307257219 podStartE2EDuration="5.61527739s" podCreationTimestamp="2026-01-27 20:20:41 +0000 UTC" firstStartedPulling="2026-01-27 20:20:42.265924384 +0000 UTC m=+5933.623778088" lastFinishedPulling="2026-01-27 20:20:43.573944595 +0000 UTC m=+5934.931798259" observedRunningTime="2026-01-27 20:20:46.60877775 +0000 UTC m=+5937.966631424" watchObservedRunningTime="2026-01-27 20:20:46.61527739 +0000 UTC m=+5937.973131064" Jan 27 20:20:53 crc kubenswrapper[4915]: I0127 20:20:53.882764 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-lkpg8" Jan 27 20:20:55 crc kubenswrapper[4915]: I0127 20:20:55.410614 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-f8s9t" Jan 27 20:20:56 crc kubenswrapper[4915]: I0127 20:20:56.358706 4915 scope.go:117] "RemoveContainer" 
containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:20:56 crc kubenswrapper[4915]: E0127 20:20:56.359096 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:20:56 crc kubenswrapper[4915]: I0127 20:20:56.496241 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-ngf58" Jan 27 20:21:08 crc kubenswrapper[4915]: I0127 20:21:08.357817 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:21:08 crc kubenswrapper[4915]: E0127 20:21:08.358442 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:21:21 crc kubenswrapper[4915]: I0127 20:21:21.358385 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:21:21 crc kubenswrapper[4915]: E0127 20:21:21.359354 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:21:36 crc kubenswrapper[4915]: I0127 20:21:36.358512 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:21:36 crc kubenswrapper[4915]: E0127 20:21:36.359631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.050323 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8dtdm"] Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.064673 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a1c8-account-create-update-xmztl"] Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.078321 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a1c8-account-create-update-xmztl"] Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.088321 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8dtdm"] Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.367718 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71be97d2-7ddc-4a74-8aae-e1db009018fe" path="/var/lib/kubelet/pods/71be97d2-7ddc-4a74-8aae-e1db009018fe/volumes" Jan 27 20:21:43 crc kubenswrapper[4915]: I0127 20:21:43.369546 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea86373-6961-41e9-818c-68a2f3f0805f" path="/var/lib/kubelet/pods/9ea86373-6961-41e9-818c-68a2f3f0805f/volumes" Jan 27 20:21:49 crc 
kubenswrapper[4915]: I0127 20:21:49.032524 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dhghd"] Jan 27 20:21:49 crc kubenswrapper[4915]: I0127 20:21:49.040534 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dhghd"] Jan 27 20:21:49 crc kubenswrapper[4915]: I0127 20:21:49.370813 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:21:49 crc kubenswrapper[4915]: E0127 20:21:49.371092 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:21:49 crc kubenswrapper[4915]: I0127 20:21:49.379065 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c4e332-4694-40df-b17a-fc0755eebddc" path="/var/lib/kubelet/pods/93c4e332-4694-40df-b17a-fc0755eebddc/volumes" Jan 27 20:21:57 crc kubenswrapper[4915]: I0127 20:21:57.369494 4915 scope.go:117] "RemoveContainer" containerID="ef561dc8e739a2c43dd4fcb1546fa37e6ff5d701ac46a606a8583f76261a149f" Jan 27 20:21:57 crc kubenswrapper[4915]: I0127 20:21:57.401349 4915 scope.go:117] "RemoveContainer" containerID="1c22f088a4a7ca8c44e753f1c279892adeb439bc461ced75809af22ba214b798" Jan 27 20:21:57 crc kubenswrapper[4915]: I0127 20:21:57.437553 4915 scope.go:117] "RemoveContainer" containerID="afca5d0db5e09e2e2f16c3ba5666a9f6849f0e3b45dd82110151438e59a4e31b" Jan 27 20:21:57 crc kubenswrapper[4915]: I0127 20:21:57.467325 4915 scope.go:117] "RemoveContainer" containerID="34c909baded24626db964fa82be273a30a88ffc8025559bd81c37ca1adaeb8ba" Jan 27 20:21:57 crc kubenswrapper[4915]: I0127 20:21:57.499668 
4915 scope.go:117] "RemoveContainer" containerID="75ca7a4d14b11f7ecc0d50303e549b9349cc3096a61dedfb1c72bece53a0fc6f" Jan 27 20:22:01 crc kubenswrapper[4915]: I0127 20:22:01.358196 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:22:01 crc kubenswrapper[4915]: E0127 20:22:01.359263 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:22:12 crc kubenswrapper[4915]: I0127 20:22:12.357520 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:22:12 crc kubenswrapper[4915]: E0127 20:22:12.358261 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:22:16 crc kubenswrapper[4915]: I0127 20:22:16.051387 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-smsxm"] Jan 27 20:22:16 crc kubenswrapper[4915]: I0127 20:22:16.063925 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a45d-account-create-update-xzgwt"] Jan 27 20:22:16 crc kubenswrapper[4915]: I0127 20:22:16.075191 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a45d-account-create-update-xzgwt"] Jan 27 20:22:16 crc 
kubenswrapper[4915]: I0127 20:22:16.086150 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-smsxm"] Jan 27 20:22:17 crc kubenswrapper[4915]: I0127 20:22:17.369606 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bdf6cd-11c3-4fba-9508-76c9e9a69467" path="/var/lib/kubelet/pods/36bdf6cd-11c3-4fba-9508-76c9e9a69467/volumes" Jan 27 20:22:17 crc kubenswrapper[4915]: I0127 20:22:17.371577 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c232a473-6549-423f-8c63-79e5507506e5" path="/var/lib/kubelet/pods/c232a473-6549-423f-8c63-79e5507506e5/volumes" Jan 27 20:22:23 crc kubenswrapper[4915]: I0127 20:22:23.358142 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:22:23 crc kubenswrapper[4915]: E0127 20:22:23.359066 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:22:27 crc kubenswrapper[4915]: I0127 20:22:27.042145 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nx2wg"] Jan 27 20:22:27 crc kubenswrapper[4915]: I0127 20:22:27.053909 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nx2wg"] Jan 27 20:22:27 crc kubenswrapper[4915]: I0127 20:22:27.370089 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5de868-ebe4-4f41-b480-4a71a76b1d0f" path="/var/lib/kubelet/pods/2f5de868-ebe4-4f41-b480-4a71a76b1d0f/volumes" Jan 27 20:22:38 crc kubenswrapper[4915]: I0127 20:22:38.358283 4915 scope.go:117] "RemoveContainer" 
containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:22:38 crc kubenswrapper[4915]: E0127 20:22:38.358975 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:22:51 crc kubenswrapper[4915]: I0127 20:22:51.358443 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114" Jan 27 20:22:51 crc kubenswrapper[4915]: I0127 20:22:51.997289 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5"} Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.538943 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.548094 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.587040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.688064 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lz2p\" (UniqueName: \"kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.688409 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.688659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.790325 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.790443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6lz2p\" (UniqueName: \"kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.790569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.790980 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.790982 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.815856 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lz2p\" (UniqueName: \"kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p\") pod \"redhat-operators-4bv9q\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:56 crc kubenswrapper[4915]: I0127 20:22:56.897609 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:22:57 crc kubenswrapper[4915]: I0127 20:22:57.370217 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:22:57 crc kubenswrapper[4915]: I0127 20:22:57.608982 4915 scope.go:117] "RemoveContainer" containerID="6f3665eb13118bd0967bedead4ce7cbedb3aa1bdbb7e0c28c4c83c8e489ca220" Jan 27 20:22:57 crc kubenswrapper[4915]: I0127 20:22:57.655462 4915 scope.go:117] "RemoveContainer" containerID="a2387b76251fe60d9695f160467d8820dd9ea061351d508066b5c2396ced25cb" Jan 27 20:22:57 crc kubenswrapper[4915]: I0127 20:22:57.673122 4915 scope.go:117] "RemoveContainer" containerID="a1c4853e09911892c1f28e538d09d6be40809596cf5a44be7ccc31186e0bc81a" Jan 27 20:22:58 crc kubenswrapper[4915]: I0127 20:22:58.075947 4915 generic.go:334] "Generic (PLEG): container finished" podID="38658324-1b1a-455e-bf26-47a23aa440e8" containerID="8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58" exitCode=0 Jan 27 20:22:58 crc kubenswrapper[4915]: I0127 20:22:58.076006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerDied","Data":"8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58"} Jan 27 20:22:58 crc kubenswrapper[4915]: I0127 20:22:58.076284 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerStarted","Data":"5bd89e3a93c3e6250681afbd1bad9f7ea29da3a05e866917794603bc7308a5b5"} Jan 27 20:22:59 crc kubenswrapper[4915]: I0127 20:22:59.091706 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerStarted","Data":"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44"} 
Jan 27 20:23:01 crc kubenswrapper[4915]: I0127 20:23:01.114207 4915 generic.go:334] "Generic (PLEG): container finished" podID="38658324-1b1a-455e-bf26-47a23aa440e8" containerID="ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44" exitCode=0 Jan 27 20:23:01 crc kubenswrapper[4915]: I0127 20:23:01.114305 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerDied","Data":"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44"} Jan 27 20:23:02 crc kubenswrapper[4915]: I0127 20:23:02.129658 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerStarted","Data":"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4"} Jan 27 20:23:02 crc kubenswrapper[4915]: I0127 20:23:02.153657 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bv9q" podStartSLOduration=2.722822495 podStartE2EDuration="6.15363156s" podCreationTimestamp="2026-01-27 20:22:56 +0000 UTC" firstStartedPulling="2026-01-27 20:22:58.07863455 +0000 UTC m=+6069.436488214" lastFinishedPulling="2026-01-27 20:23:01.509443575 +0000 UTC m=+6072.867297279" observedRunningTime="2026-01-27 20:23:02.148549235 +0000 UTC m=+6073.506402949" watchObservedRunningTime="2026-01-27 20:23:02.15363156 +0000 UTC m=+6073.511485254" Jan 27 20:23:06 crc kubenswrapper[4915]: I0127 20:23:06.897763 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:06 crc kubenswrapper[4915]: I0127 20:23:06.898370 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:07 crc kubenswrapper[4915]: I0127 20:23:07.939436 4915 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-4bv9q" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="registry-server" probeResult="failure" output=< Jan 27 20:23:07 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 20:23:07 crc kubenswrapper[4915]: > Jan 27 20:23:08 crc kubenswrapper[4915]: I0127 20:23:08.045094 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6079-account-create-update-5cdgf"] Jan 27 20:23:08 crc kubenswrapper[4915]: I0127 20:23:08.062150 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6079-account-create-update-5cdgf"] Jan 27 20:23:09 crc kubenswrapper[4915]: I0127 20:23:09.036651 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kmmcp"] Jan 27 20:23:09 crc kubenswrapper[4915]: I0127 20:23:09.052388 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kmmcp"] Jan 27 20:23:09 crc kubenswrapper[4915]: I0127 20:23:09.381658 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848fb3e1-4209-4d4a-8e31-b66952b1a767" path="/var/lib/kubelet/pods/848fb3e1-4209-4d4a-8e31-b66952b1a767/volumes" Jan 27 20:23:09 crc kubenswrapper[4915]: I0127 20:23:09.384068 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e8b539-5c97-4710-94a4-8bd3395ea16a" path="/var/lib/kubelet/pods/c9e8b539-5c97-4710-94a4-8bd3395ea16a/volumes" Jan 27 20:23:16 crc kubenswrapper[4915]: I0127 20:23:16.950779 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:16 crc kubenswrapper[4915]: I0127 20:23:16.998710 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:17 crc kubenswrapper[4915]: I0127 20:23:17.039703 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-nqxnm"] Jan 27 20:23:17 crc kubenswrapper[4915]: I0127 20:23:17.049881 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nqxnm"] Jan 27 20:23:17 crc kubenswrapper[4915]: I0127 20:23:17.195299 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:23:17 crc kubenswrapper[4915]: I0127 20:23:17.371095 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c9fb59-a8c5-4411-975d-d2751dca2344" path="/var/lib/kubelet/pods/00c9fb59-a8c5-4411-975d-d2751dca2344/volumes" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.267684 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bv9q" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="registry-server" containerID="cri-o://0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4" gracePeriod=2 Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.740477 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.826512 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities\") pod \"38658324-1b1a-455e-bf26-47a23aa440e8\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.826626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content\") pod \"38658324-1b1a-455e-bf26-47a23aa440e8\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.826716 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lz2p\" (UniqueName: \"kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p\") pod \"38658324-1b1a-455e-bf26-47a23aa440e8\" (UID: \"38658324-1b1a-455e-bf26-47a23aa440e8\") " Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.828026 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities" (OuterVolumeSpecName: "utilities") pod "38658324-1b1a-455e-bf26-47a23aa440e8" (UID: "38658324-1b1a-455e-bf26-47a23aa440e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.835909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p" (OuterVolumeSpecName: "kube-api-access-6lz2p") pod "38658324-1b1a-455e-bf26-47a23aa440e8" (UID: "38658324-1b1a-455e-bf26-47a23aa440e8"). InnerVolumeSpecName "kube-api-access-6lz2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.929187 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lz2p\" (UniqueName: \"kubernetes.io/projected/38658324-1b1a-455e-bf26-47a23aa440e8-kube-api-access-6lz2p\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.929235 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:18 crc kubenswrapper[4915]: I0127 20:23:18.949437 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38658324-1b1a-455e-bf26-47a23aa440e8" (UID: "38658324-1b1a-455e-bf26-47a23aa440e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.031367 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38658324-1b1a-455e-bf26-47a23aa440e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.278445 4915 generic.go:334] "Generic (PLEG): container finished" podID="38658324-1b1a-455e-bf26-47a23aa440e8" containerID="0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4" exitCode=0 Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.278505 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerDied","Data":"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4"} Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.278509 4915 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bv9q" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.278560 4915 scope.go:117] "RemoveContainer" containerID="0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.278546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bv9q" event={"ID":"38658324-1b1a-455e-bf26-47a23aa440e8","Type":"ContainerDied","Data":"5bd89e3a93c3e6250681afbd1bad9f7ea29da3a05e866917794603bc7308a5b5"} Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.313760 4915 scope.go:117] "RemoveContainer" containerID="ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.327544 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.334635 4915 scope.go:117] "RemoveContainer" containerID="8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.335663 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bv9q"] Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.371695 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" path="/var/lib/kubelet/pods/38658324-1b1a-455e-bf26-47a23aa440e8/volumes" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.391622 4915 scope.go:117] "RemoveContainer" containerID="0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4" Jan 27 20:23:19 crc kubenswrapper[4915]: E0127 20:23:19.392072 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4\": container with ID starting with 
0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4 not found: ID does not exist" containerID="0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.392120 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4"} err="failed to get container status \"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4\": rpc error: code = NotFound desc = could not find container \"0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4\": container with ID starting with 0b513527391e28cf0f221897b63b1dde864a5e76089e34fc6efa0e2bda86b9c4 not found: ID does not exist" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.392152 4915 scope.go:117] "RemoveContainer" containerID="ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44" Jan 27 20:23:19 crc kubenswrapper[4915]: E0127 20:23:19.392502 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44\": container with ID starting with ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44 not found: ID does not exist" containerID="ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.392526 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44"} err="failed to get container status \"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44\": rpc error: code = NotFound desc = could not find container \"ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44\": container with ID starting with ede4749c88f206f30ed3ded3b6a32acec7d66ec77613cfba536ed5a9b854bb44 not found: ID does not 
exist" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.392542 4915 scope.go:117] "RemoveContainer" containerID="8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58" Jan 27 20:23:19 crc kubenswrapper[4915]: E0127 20:23:19.392814 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58\": container with ID starting with 8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58 not found: ID does not exist" containerID="8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58" Jan 27 20:23:19 crc kubenswrapper[4915]: I0127 20:23:19.392844 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58"} err="failed to get container status \"8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58\": rpc error: code = NotFound desc = could not find container \"8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58\": container with ID starting with 8b01f8ff3b757fbd0c5192e328bd027127013f6657687044677718ae4df91d58 not found: ID does not exist" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.707323 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"] Jan 27 20:23:31 crc kubenswrapper[4915]: E0127 20:23:31.708500 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="extract-content" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.708520 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="extract-content" Jan 27 20:23:31 crc kubenswrapper[4915]: E0127 20:23:31.708534 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" 
containerName="extract-utilities" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.708543 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="extract-utilities" Jan 27 20:23:31 crc kubenswrapper[4915]: E0127 20:23:31.708579 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="registry-server" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.708587 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="registry-server" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.708849 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="38658324-1b1a-455e-bf26-47a23aa440e8" containerName="registry-server" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.716710 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.748038 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"] Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.836851 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.836928 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnlt\" (UniqueName: \"kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " 
pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.837033 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.938821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.938908 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnlt\" (UniqueName: \"kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.939018 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.939498 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " 
pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.939527 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:31 crc kubenswrapper[4915]: I0127 20:23:31.958990 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnlt\" (UniqueName: \"kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt\") pod \"certified-operators-9pzpb\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:32 crc kubenswrapper[4915]: I0127 20:23:32.052513 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:32 crc kubenswrapper[4915]: I0127 20:23:32.630966 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"] Jan 27 20:23:32 crc kubenswrapper[4915]: W0127 20:23:32.641714 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f8f39e5_539e_47da_ba19_386e4f761ea6.slice/crio-139e84b0b77ec235627f8943cf5f7535bdbae4a1c82bc9c4a7695700e1b70bc0 WatchSource:0}: Error finding container 139e84b0b77ec235627f8943cf5f7535bdbae4a1c82bc9c4a7695700e1b70bc0: Status 404 returned error can't find the container with id 139e84b0b77ec235627f8943cf5f7535bdbae4a1c82bc9c4a7695700e1b70bc0 Jan 27 20:23:33 crc kubenswrapper[4915]: I0127 20:23:33.423116 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerID="c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5" exitCode=0 Jan 27 
20:23:33 crc kubenswrapper[4915]: I0127 20:23:33.423240 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerDied","Data":"c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5"} Jan 27 20:23:33 crc kubenswrapper[4915]: I0127 20:23:33.423439 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerStarted","Data":"139e84b0b77ec235627f8943cf5f7535bdbae4a1c82bc9c4a7695700e1b70bc0"} Jan 27 20:23:34 crc kubenswrapper[4915]: I0127 20:23:34.435473 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerStarted","Data":"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"} Jan 27 20:23:35 crc kubenswrapper[4915]: I0127 20:23:35.447572 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerID="263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5" exitCode=0 Jan 27 20:23:35 crc kubenswrapper[4915]: I0127 20:23:35.447819 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerDied","Data":"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"} Jan 27 20:23:36 crc kubenswrapper[4915]: I0127 20:23:36.461902 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerStarted","Data":"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"} Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.053655 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.054320 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.128615 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.165431 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pzpb" podStartSLOduration=8.735923231 podStartE2EDuration="11.165402754s" podCreationTimestamp="2026-01-27 20:23:31 +0000 UTC" firstStartedPulling="2026-01-27 20:23:33.425268957 +0000 UTC m=+6104.783122661" lastFinishedPulling="2026-01-27 20:23:35.85474849 +0000 UTC m=+6107.212602184" observedRunningTime="2026-01-27 20:23:36.48856463 +0000 UTC m=+6107.846418344" watchObservedRunningTime="2026-01-27 20:23:42.165402754 +0000 UTC m=+6113.523256458" Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.562998 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:42 crc kubenswrapper[4915]: I0127 20:23:42.622404 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"] Jan 27 20:23:44 crc kubenswrapper[4915]: I0127 20:23:44.060915 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7084-account-create-update-k24wr"] Jan 27 20:23:44 crc kubenswrapper[4915]: I0127 20:23:44.079393 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-z6tzp"] Jan 27 20:23:44 crc kubenswrapper[4915]: I0127 20:23:44.091195 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-z6tzp"] Jan 27 20:23:44 crc kubenswrapper[4915]: I0127 
20:23:44.100105 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7084-account-create-update-k24wr"] Jan 27 20:23:44 crc kubenswrapper[4915]: I0127 20:23:44.538636 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pzpb" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="registry-server" containerID="cri-o://cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356" gracePeriod=2 Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.095018 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzpb" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.221069 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnlt\" (UniqueName: \"kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt\") pod \"0f8f39e5-539e-47da-ba19-386e4f761ea6\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.221195 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities\") pod \"0f8f39e5-539e-47da-ba19-386e4f761ea6\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.221227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content\") pod \"0f8f39e5-539e-47da-ba19-386e4f761ea6\" (UID: \"0f8f39e5-539e-47da-ba19-386e4f761ea6\") " Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.222461 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities" 
(OuterVolumeSpecName: "utilities") pod "0f8f39e5-539e-47da-ba19-386e4f761ea6" (UID: "0f8f39e5-539e-47da-ba19-386e4f761ea6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.243021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt" (OuterVolumeSpecName: "kube-api-access-kfnlt") pod "0f8f39e5-539e-47da-ba19-386e4f761ea6" (UID: "0f8f39e5-539e-47da-ba19-386e4f761ea6"). InnerVolumeSpecName "kube-api-access-kfnlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.323886 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnlt\" (UniqueName: \"kubernetes.io/projected/0f8f39e5-539e-47da-ba19-386e4f761ea6-kube-api-access-kfnlt\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.323931 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.370711 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478c9727-7969-4a55-a51b-0f10d52bbcee" path="/var/lib/kubelet/pods/478c9727-7969-4a55-a51b-0f10d52bbcee/volumes" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.371730 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531d7f69-5afe-4721-8600-7cba22e2a02f" path="/var/lib/kubelet/pods/531d7f69-5afe-4721-8600-7cba22e2a02f/volumes" Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.419995 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"0f8f39e5-539e-47da-ba19-386e4f761ea6" (UID: "0f8f39e5-539e-47da-ba19-386e4f761ea6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.426073 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f8f39e5-539e-47da-ba19-386e4f761ea6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.553603 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerID="cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356" exitCode=0
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.553943 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerDied","Data":"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"}
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.554107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pzpb" event={"ID":"0f8f39e5-539e-47da-ba19-386e4f761ea6","Type":"ContainerDied","Data":"139e84b0b77ec235627f8943cf5f7535bdbae4a1c82bc9c4a7695700e1b70bc0"}
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.554181 4915 scope.go:117] "RemoveContainer" containerID="cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.554021 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pzpb"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.587222 4915 scope.go:117] "RemoveContainer" containerID="263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.601428 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"]
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.610626 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pzpb"]
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.629837 4915 scope.go:117] "RemoveContainer" containerID="c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.686890 4915 scope.go:117] "RemoveContainer" containerID="cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"
Jan 27 20:23:45 crc kubenswrapper[4915]: E0127 20:23:45.687580 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356\": container with ID starting with cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356 not found: ID does not exist" containerID="cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.687740 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356"} err="failed to get container status \"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356\": rpc error: code = NotFound desc = could not find container \"cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356\": container with ID starting with cc3c3124e20b7fc31bcc17b5ca2b1d1e5a77cfe61363ff074eb03641b75ad356 not found: ID does not exist"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.687890 4915 scope.go:117] "RemoveContainer" containerID="263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"
Jan 27 20:23:45 crc kubenswrapper[4915]: E0127 20:23:45.688349 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5\": container with ID starting with 263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5 not found: ID does not exist" containerID="263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.688496 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5"} err="failed to get container status \"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5\": rpc error: code = NotFound desc = could not find container \"263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5\": container with ID starting with 263844e04b6a1a247ad689e618ca91c18a33246dffa66cbf2855dac8214b34c5 not found: ID does not exist"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.688611 4915 scope.go:117] "RemoveContainer" containerID="c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5"
Jan 27 20:23:45 crc kubenswrapper[4915]: E0127 20:23:45.689137 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5\": container with ID starting with c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5 not found: ID does not exist" containerID="c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5"
Jan 27 20:23:45 crc kubenswrapper[4915]: I0127 20:23:45.689203 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5"} err="failed to get container status \"c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5\": rpc error: code = NotFound desc = could not find container \"c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5\": container with ID starting with c2e05c5f4ac35a0313180e2657fd0ac692dd60e612a1588e1652c62e57fa4bb5 not found: ID does not exist"
Jan 27 20:23:47 crc kubenswrapper[4915]: I0127 20:23:47.368278 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" path="/var/lib/kubelet/pods/0f8f39e5-539e-47da-ba19-386e4f761ea6/volumes"
Jan 27 20:23:51 crc kubenswrapper[4915]: I0127 20:23:51.047729 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gpx4t"]
Jan 27 20:23:51 crc kubenswrapper[4915]: I0127 20:23:51.062011 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gpx4t"]
Jan 27 20:23:51 crc kubenswrapper[4915]: I0127 20:23:51.374354 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2606ee-f8a5-4465-8ab4-feb4ccea59e2" path="/var/lib/kubelet/pods/aa2606ee-f8a5-4465-8ab4-feb4ccea59e2/volumes"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.744575 4915 scope.go:117] "RemoveContainer" containerID="fb8d352cc44eeba36594b18ae9a6b1a992faf783308def8cfed1feaecfb3e191"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.776010 4915 scope.go:117] "RemoveContainer" containerID="a6bae7069833d21acf5e928f21cd290ea796ed4117037b497884d21f42618186"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.827662 4915 scope.go:117] "RemoveContainer" containerID="7e126cc296def1ec61a71aa659721f23cbd6b572935b12f84bd4e1cf792489ba"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.852238 4915 scope.go:117] "RemoveContainer" containerID="418dd9be86baa5d7c19a7cbe1f5cf3b8c10eeb767727fe741f8d83aa05fc19f4"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.912173 4915 scope.go:117] "RemoveContainer" containerID="1834a31a0cf740981d5d79adccad04217d9d433a78c0044513e76e4ed2fc1868"
Jan 27 20:23:57 crc kubenswrapper[4915]: I0127 20:23:57.947641 4915 scope.go:117] "RemoveContainer" containerID="42a8a3c304454fc5ad99caa855897210179275d3b63b7b50215b5d1a5df0a868"
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.061979 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-k85wj"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.076737 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s5whp"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.084962 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-43f6-account-create-update-8rx5z"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.092239 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s5whp"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.099714 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-k85wj"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.109905 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-43f6-account-create-update-8rx5z"]
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.376344 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f70372-2930-4901-a532-193c12377d2e" path="/var/lib/kubelet/pods/40f70372-2930-4901-a532-193c12377d2e/volumes"
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.377149 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7e9679-5e88-434f-9060-cb5da12c3217" path="/var/lib/kubelet/pods/bd7e9679-5e88-434f-9060-cb5da12c3217/volumes"
Jan 27 20:24:47 crc kubenswrapper[4915]: I0127 20:24:47.377908 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81bdbbf-7100-4e76-82e3-92585b3fe6f4" path="/var/lib/kubelet/pods/f81bdbbf-7100-4e76-82e3-92585b3fe6f4/volumes"
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.034025 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-25tx8"]
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.045241 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-64f9-account-create-update-mwfg6"]
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.060992 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1f37-account-create-update-ns88t"]
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.071173 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-64f9-account-create-update-mwfg6"]
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.082222 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-25tx8"]
Jan 27 20:24:48 crc kubenswrapper[4915]: I0127 20:24:48.091267 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1f37-account-create-update-ns88t"]
Jan 27 20:24:49 crc kubenswrapper[4915]: I0127 20:24:49.373698 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffbb1fa-90be-4883-beac-6ab4e9d05117" path="/var/lib/kubelet/pods/2ffbb1fa-90be-4883-beac-6ab4e9d05117/volumes"
Jan 27 20:24:49 crc kubenswrapper[4915]: I0127 20:24:49.374581 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7508feea-1859-4276-9235-4008ec0767a0" path="/var/lib/kubelet/pods/7508feea-1859-4276-9235-4008ec0767a0/volumes"
Jan 27 20:24:49 crc kubenswrapper[4915]: I0127 20:24:49.375379 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba08e34-40c6-440c-961b-757a4716a9fc" path="/var/lib/kubelet/pods/8ba08e34-40c6-440c-961b-757a4716a9fc/volumes"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.036768 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-29rl9"]
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.049864 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-29rl9"]
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.126755 4915 scope.go:117] "RemoveContainer" containerID="9885e32ed0c96cee550752d0ac99b55ffb6a503001315d538f9adc9a67570ae7"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.156470 4915 scope.go:117] "RemoveContainer" containerID="bec161609d482b98248b2d7c3142c06ec62b97f617c5d49589ae8f1e6237e719"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.208262 4915 scope.go:117] "RemoveContainer" containerID="cff9f4960953613ee14fa28f22270d0640c1ea1c42b00b50a3c28bd353f5cadf"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.250933 4915 scope.go:117] "RemoveContainer" containerID="a9a8515a7269f60718b5e8af61fe9b289fa1c7273f62047c5613709939a8ce57"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.287212 4915 scope.go:117] "RemoveContainer" containerID="f5eed7dc7178965cbc12a7bcc0efe005209b9ad321a354055d06a5ecbb5200b2"
Jan 27 20:24:58 crc kubenswrapper[4915]: I0127 20:24:58.329043 4915 scope.go:117] "RemoveContainer" containerID="0473993f63356ca9caf6b2642d6fecbf4977660ac309f73249f86aa440f4a2ab"
Jan 27 20:24:59 crc kubenswrapper[4915]: I0127 20:24:59.378814 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e95f84a-c23d-46b0-b1bc-619ca8ae283c" path="/var/lib/kubelet/pods/8e95f84a-c23d-46b0-b1bc-619ca8ae283c/volumes"
Jan 27 20:25:16 crc kubenswrapper[4915]: I0127 20:25:16.056765 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s687l"]
Jan 27 20:25:16 crc kubenswrapper[4915]: I0127 20:25:16.065537 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-s687l"]
Jan 27 20:25:17 crc kubenswrapper[4915]: I0127 20:25:17.039101 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qw7g5"]
Jan 27 20:25:17 crc kubenswrapper[4915]: I0127 20:25:17.055110 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qw7g5"]
Jan 27 20:25:17 crc kubenswrapper[4915]: I0127 20:25:17.370724 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65254045-1103-45cd-b7ad-da904497894e" path="/var/lib/kubelet/pods/65254045-1103-45cd-b7ad-da904497894e/volumes"
Jan 27 20:25:17 crc kubenswrapper[4915]: I0127 20:25:17.372055 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85014245-b9ed-4ed1-834f-224ba6e918b0" path="/var/lib/kubelet/pods/85014245-b9ed-4ed1-834f-224ba6e918b0/volumes"
Jan 27 20:25:20 crc kubenswrapper[4915]: I0127 20:25:20.624233 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:25:20 crc kubenswrapper[4915]: I0127 20:25:20.624920 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:25:30 crc kubenswrapper[4915]: I0127 20:25:30.031106 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-t6qgw"]
Jan 27 20:25:30 crc kubenswrapper[4915]: I0127 20:25:30.039140 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-t6qgw"]
Jan 27 20:25:31 crc kubenswrapper[4915]: I0127 20:25:31.374519 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d602920-84b6-42be-936d-95dfc8c05b96" path="/var/lib/kubelet/pods/6d602920-84b6-42be-936d-95dfc8c05b96/volumes"
Jan 27 20:25:50 crc kubenswrapper[4915]: I0127 20:25:50.624880 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:25:50 crc kubenswrapper[4915]: I0127 20:25:50.625901 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.467574 4915 scope.go:117] "RemoveContainer" containerID="e96ea98ec725452c407773b6130ea4282ab8a6570f3c448c8e95ab1c531b1ca8"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.519552 4915 scope.go:117] "RemoveContainer" containerID="50d2b040595422025d81955e328a55534245fc5f052189d1b50f723556d66d47"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.583435 4915 scope.go:117] "RemoveContainer" containerID="ce21af0f387886bdb895c158c3d058a99f94f7ff8aeaa61153332404d699aeea"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.652895 4915 scope.go:117] "RemoveContainer" containerID="969f029243ffebf7028457f97459cf302f8157643118e8acefd8fd27814c358a"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.719438 4915 scope.go:117] "RemoveContainer" containerID="cbc41f903edc3f3d4ea202d58435dbc6878ae0e7e57c864f6164cd6e3e754f6c"
Jan 27 20:25:58 crc kubenswrapper[4915]: I0127 20:25:58.767910 4915 scope.go:117] "RemoveContainer" containerID="ed649be42c4904a753bb18e29f66456767f94078c79d8d9a28233c88df28e18e"
Jan 27 20:26:14 crc kubenswrapper[4915]: I0127 20:26:14.051337 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-45f5-account-create-update-4mgch"]
Jan 27 20:26:14 crc kubenswrapper[4915]: I0127 20:26:14.059945 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wm86m"]
Jan 27 20:26:14 crc kubenswrapper[4915]: I0127 20:26:14.067535 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-45f5-account-create-update-4mgch"]
Jan 27 20:26:14 crc kubenswrapper[4915]: I0127 20:26:14.073855 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wm86m"]
Jan 27 20:26:15 crc kubenswrapper[4915]: I0127 20:26:15.371031 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a9cc90-238c-46d4-9adb-84aa486ac4b3" path="/var/lib/kubelet/pods/29a9cc90-238c-46d4-9adb-84aa486ac4b3/volumes"
Jan 27 20:26:15 crc kubenswrapper[4915]: I0127 20:26:15.375958 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af11f2e1-23ce-4bfa-af41-f4ad85e784d2" path="/var/lib/kubelet/pods/af11f2e1-23ce-4bfa-af41-f4ad85e784d2/volumes"
Jan 27 20:26:20 crc kubenswrapper[4915]: I0127 20:26:20.625406 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:26:20 crc kubenswrapper[4915]: I0127 20:26:20.626160 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:26:20 crc kubenswrapper[4915]: I0127 20:26:20.626230 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:26:20 crc kubenswrapper[4915]: I0127 20:26:20.627351 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:26:20 crc kubenswrapper[4915]: I0127 20:26:20.627489 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5" gracePeriod=600
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.028233 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nhh6j"]
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.036293 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nhh6j"]
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.094088 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5" exitCode=0
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.094138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5"}
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.094175 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"}
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.094196 4915 scope.go:117] "RemoveContainer" containerID="fee94f51c314411a8a2fd57f5303f1fe14aa92840674cc04dec68801ad0a7114"
Jan 27 20:26:21 crc kubenswrapper[4915]: I0127 20:26:21.371635 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f8b501-4793-4736-947c-9782978ee613" path="/var/lib/kubelet/pods/53f8b501-4793-4736-947c-9782978ee613/volumes"
Jan 27 20:26:58 crc kubenswrapper[4915]: I0127 20:26:58.930958 4915 scope.go:117] "RemoveContainer" containerID="fc426fba05219d0f4345a4616e02896fe1ceda2521a57bf6853e42ada8db0537"
Jan 27 20:26:58 crc kubenswrapper[4915]: I0127 20:26:58.973647 4915 scope.go:117] "RemoveContainer" containerID="dde7e48565783ec89cd137f8224a76f3fb17f39dceccd92e96cea00675497aa0"
Jan 27 20:26:59 crc kubenswrapper[4915]: I0127 20:26:59.001123 4915 scope.go:117] "RemoveContainer" containerID="e46a1f50336a102dc80d5d23eb54518b77210129002f00b4243ba23ae57edd48"
Jan 27 20:28:20 crc kubenswrapper[4915]: I0127 20:28:20.625268 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:28:20 crc kubenswrapper[4915]: I0127 20:28:20.625962 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:28:50 crc kubenswrapper[4915]: I0127 20:28:50.624759 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:28:50 crc kubenswrapper[4915]: I0127 20:28:50.625436 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:29:05 crc kubenswrapper[4915]: I0127 20:29:05.034396 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-jlkcq"]
Jan 27 20:29:05 crc kubenswrapper[4915]: I0127 20:29:05.041533 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-jlkcq"]
Jan 27 20:29:05 crc kubenswrapper[4915]: I0127 20:29:05.372054 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e3005f-a1f7-4107-99da-dce54d08fb8f" path="/var/lib/kubelet/pods/12e3005f-a1f7-4107-99da-dce54d08fb8f/volumes"
Jan 27 20:29:07 crc kubenswrapper[4915]: I0127 20:29:07.031102 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-6c17-account-create-update-b7x64"]
Jan 27 20:29:07 crc kubenswrapper[4915]: I0127 20:29:07.038687 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-6c17-account-create-update-b7x64"]
Jan 27 20:29:07 crc kubenswrapper[4915]: I0127 20:29:07.370717 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4669e3-bdc5-4c02-bec9-2d354c3a7333" path="/var/lib/kubelet/pods/3b4669e3-bdc5-4c02-bec9-2d354c3a7333/volumes"
Jan 27 20:29:13 crc kubenswrapper[4915]: I0127 20:29:13.030754 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-rwn2l"]
Jan 27 20:29:13 crc kubenswrapper[4915]: I0127 20:29:13.042363 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-rwn2l"]
Jan 27 20:29:13 crc kubenswrapper[4915]: I0127 20:29:13.368522 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2019a39c-dbff-4cdc-abed-9b74dda1f9c8" path="/var/lib/kubelet/pods/2019a39c-dbff-4cdc-abed-9b74dda1f9c8/volumes"
Jan 27 20:29:14 crc kubenswrapper[4915]: I0127 20:29:14.025613 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-e734-account-create-update-wqtdl"]
Jan 27 20:29:14 crc kubenswrapper[4915]: I0127 20:29:14.033866 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-e734-account-create-update-wqtdl"]
Jan 27 20:29:15 crc kubenswrapper[4915]: I0127 20:29:15.367312 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc" path="/var/lib/kubelet/pods/eb14c5eb-b50a-4c28-98e5-8a88ab29b2cc/volumes"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.624698 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.625401 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.625454 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.626308 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.626373 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" gracePeriod=600
Jan 27 20:29:20 crc kubenswrapper[4915]: E0127 20:29:20.752162 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.815979 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" exitCode=0
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.816042 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"}
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.816377 4915 scope.go:117] "RemoveContainer" containerID="75d63496c16b64c31133696aced61b5f6a564e9bc414f0b949920a0c880d57d5"
Jan 27 20:29:20 crc kubenswrapper[4915]: I0127 20:29:20.817395 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:29:20 crc kubenswrapper[4915]: E0127 20:29:20.817953 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:29:36 crc kubenswrapper[4915]: I0127 20:29:36.358187 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:29:36 crc kubenswrapper[4915]: E0127 20:29:36.359068 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:29:48 crc kubenswrapper[4915]: I0127 20:29:48.358092 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:29:48 crc kubenswrapper[4915]: E0127 20:29:48.358927 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:29:54 crc kubenswrapper[4915]: I0127 20:29:54.049126 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-gvkrl"]
Jan 27 20:29:54 crc kubenswrapper[4915]: I0127 20:29:54.059367 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-gvkrl"]
Jan 27 20:29:55 crc kubenswrapper[4915]: I0127 20:29:55.369497 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd34b342-7ecd-4767-ae22-bcc59df27294" path="/var/lib/kubelet/pods/fd34b342-7ecd-4767-ae22-bcc59df27294/volumes"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.146006 4915 scope.go:117] "RemoveContainer" containerID="5f2e914382dc72d2cdefcfb329826a5243207431ddb241c1be323d682549ce3b"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.171559 4915 scope.go:117] "RemoveContainer" containerID="0364aa449493f950bdbcc73b039529bf0150f68454f17554f47e92e40d75796a"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.220874 4915 scope.go:117] "RemoveContainer" containerID="b66b54aa04890aff15548bccfa38bbad809750f3abd02ed7421266f34d335397"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.257936 4915 scope.go:117] "RemoveContainer" containerID="cf4ac298c9cfa9200337278db4f3025a0fd0911a6bc7c68a14ec8239186ece61"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.297337 4915 scope.go:117] "RemoveContainer" containerID="97e9d564cc1d583b44c1ba1c9d614d693c0beb0b58ded612e553ca8257957774"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.356340 4915 scope.go:117] "RemoveContainer" containerID="1b7395f9b71ad2bcbcea1c90e477e62fbc86e4ab9cd07a4ea01fe3f43b35791e"
Jan 27 20:29:59 crc kubenswrapper[4915]: I0127 20:29:59.365580 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:29:59 crc kubenswrapper[4915]: E0127 20:29:59.365780 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.149089 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"]
Jan 27 20:30:00 crc kubenswrapper[4915]: E0127 20:30:00.149652 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="extract-utilities"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.149667 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="extract-utilities"
Jan 27 20:30:00 crc kubenswrapper[4915]: E0127 20:30:00.149712 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="extract-content"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.149722 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="extract-content"
Jan 27 20:30:00 crc kubenswrapper[4915]: E0127 20:30:00.149733 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="registry-server"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.149740 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="registry-server"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.150011 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8f39e5-539e-47da-ba19-386e4f761ea6" containerName="registry-server"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.150889 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.155668 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.155997 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.177276 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"]
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.231478 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.231547 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5dr\" (UniqueName: \"kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.231768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.333961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk5dr\" (UniqueName: \"kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.334155 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.334190 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.335571 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.342838 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.349672 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk5dr\" (UniqueName: \"kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr\") pod \"collect-profiles-29492430-pzhw4\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"
Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.519212 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" Jan 27 20:30:00 crc kubenswrapper[4915]: I0127 20:30:00.983480 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"] Jan 27 20:30:01 crc kubenswrapper[4915]: I0127 20:30:01.216266 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" event={"ID":"2f0fc2e7-ca2f-4479-a3ba-8512004f1298","Type":"ContainerStarted","Data":"fbd10a4cd783b1c991e66052af6fbc2fe56831fc5ed435310b4cb64511279579"} Jan 27 20:30:01 crc kubenswrapper[4915]: I0127 20:30:01.216402 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" event={"ID":"2f0fc2e7-ca2f-4479-a3ba-8512004f1298","Type":"ContainerStarted","Data":"3409c48ee14f67a8dd526f7c699094d6b9ddd6282e6c88db4826b312992dee8f"} Jan 27 20:30:01 crc kubenswrapper[4915]: I0127 20:30:01.235323 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" podStartSLOduration=1.23530227 podStartE2EDuration="1.23530227s" podCreationTimestamp="2026-01-27 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:01.229956599 +0000 UTC m=+6492.587810263" watchObservedRunningTime="2026-01-27 20:30:01.23530227 +0000 UTC m=+6492.593155934" Jan 27 20:30:02 crc kubenswrapper[4915]: I0127 20:30:02.225483 4915 generic.go:334] "Generic (PLEG): container finished" podID="2f0fc2e7-ca2f-4479-a3ba-8512004f1298" containerID="fbd10a4cd783b1c991e66052af6fbc2fe56831fc5ed435310b4cb64511279579" exitCode=0 Jan 27 20:30:02 crc kubenswrapper[4915]: I0127 20:30:02.225576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" event={"ID":"2f0fc2e7-ca2f-4479-a3ba-8512004f1298","Type":"ContainerDied","Data":"fbd10a4cd783b1c991e66052af6fbc2fe56831fc5ed435310b4cb64511279579"} Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.598534 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.703459 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume\") pod \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.703891 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk5dr\" (UniqueName: \"kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr\") pod \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.704038 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f0fc2e7-ca2f-4479-a3ba-8512004f1298" (UID: "2f0fc2e7-ca2f-4479-a3ba-8512004f1298"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.704065 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume\") pod \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\" (UID: \"2f0fc2e7-ca2f-4479-a3ba-8512004f1298\") " Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.704541 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.709226 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f0fc2e7-ca2f-4479-a3ba-8512004f1298" (UID: "2f0fc2e7-ca2f-4479-a3ba-8512004f1298"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.709496 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr" (OuterVolumeSpecName: "kube-api-access-vk5dr") pod "2f0fc2e7-ca2f-4479-a3ba-8512004f1298" (UID: "2f0fc2e7-ca2f-4479-a3ba-8512004f1298"). InnerVolumeSpecName "kube-api-access-vk5dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.806272 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:03 crc kubenswrapper[4915]: I0127 20:30:03.806327 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk5dr\" (UniqueName: \"kubernetes.io/projected/2f0fc2e7-ca2f-4479-a3ba-8512004f1298-kube-api-access-vk5dr\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4915]: I0127 20:30:04.246212 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" event={"ID":"2f0fc2e7-ca2f-4479-a3ba-8512004f1298","Type":"ContainerDied","Data":"3409c48ee14f67a8dd526f7c699094d6b9ddd6282e6c88db4826b312992dee8f"} Jan 27 20:30:04 crc kubenswrapper[4915]: I0127 20:30:04.246239 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4" Jan 27 20:30:04 crc kubenswrapper[4915]: I0127 20:30:04.246255 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3409c48ee14f67a8dd526f7c699094d6b9ddd6282e6c88db4826b312992dee8f" Jan 27 20:30:04 crc kubenswrapper[4915]: I0127 20:30:04.329728 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645"] Jan 27 20:30:04 crc kubenswrapper[4915]: I0127 20:30:04.408161 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-4s645"] Jan 27 20:30:05 crc kubenswrapper[4915]: I0127 20:30:05.369186 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36574015-0446-4376-ad57-bf528f20af86" path="/var/lib/kubelet/pods/36574015-0446-4376-ad57-bf528f20af86/volumes" Jan 27 20:30:13 crc kubenswrapper[4915]: I0127 20:30:13.357874 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:30:13 crc kubenswrapper[4915]: E0127 20:30:13.359091 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:30:28 crc kubenswrapper[4915]: I0127 20:30:28.358098 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:30:28 crc kubenswrapper[4915]: E0127 20:30:28.358868 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:30:31 crc kubenswrapper[4915]: I0127 20:30:31.985830 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sqds7"] Jan 27 20:30:31 crc kubenswrapper[4915]: E0127 20:30:31.986755 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0fc2e7-ca2f-4479-a3ba-8512004f1298" containerName="collect-profiles" Jan 27 20:30:31 crc kubenswrapper[4915]: I0127 20:30:31.986775 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0fc2e7-ca2f-4479-a3ba-8512004f1298" containerName="collect-profiles" Jan 27 20:30:31 crc kubenswrapper[4915]: I0127 20:30:31.986990 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0fc2e7-ca2f-4479-a3ba-8512004f1298" containerName="collect-profiles" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:31.988493 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.000604 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqds7"] Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.154936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.155423 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.155536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2kg\" (UniqueName: \"kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.257630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.257830 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8v2kg\" (UniqueName: \"kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.257918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.258570 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.258634 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.287607 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2kg\" (UniqueName: \"kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg\") pod \"community-operators-sqds7\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") " pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.374552 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:32 crc kubenswrapper[4915]: I0127 20:30:32.914760 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqds7"] Jan 27 20:30:32 crc kubenswrapper[4915]: W0127 20:30:32.925067 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod095426f8_8107_4fa0_b1d6_a4107621e0ae.slice/crio-0f767c454fc47a9cb0771afd9976941c5c7fe042798b985965e6491e34080f5e WatchSource:0}: Error finding container 0f767c454fc47a9cb0771afd9976941c5c7fe042798b985965e6491e34080f5e: Status 404 returned error can't find the container with id 0f767c454fc47a9cb0771afd9976941c5c7fe042798b985965e6491e34080f5e Jan 27 20:30:33 crc kubenswrapper[4915]: I0127 20:30:33.558927 4915 generic.go:334] "Generic (PLEG): container finished" podID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerID="4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f" exitCode=0 Jan 27 20:30:33 crc kubenswrapper[4915]: I0127 20:30:33.559034 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerDied","Data":"4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f"} Jan 27 20:30:33 crc kubenswrapper[4915]: I0127 20:30:33.559279 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerStarted","Data":"0f767c454fc47a9cb0771afd9976941c5c7fe042798b985965e6491e34080f5e"} Jan 27 20:30:33 crc kubenswrapper[4915]: I0127 20:30:33.563523 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.569542 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerStarted","Data":"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"} Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.572293 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"] Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.574092 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.589195 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"] Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.718443 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.718923 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffr7\" (UniqueName: \"kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.718989 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 
crc kubenswrapper[4915]: I0127 20:30:34.820477 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.820583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffr7\" (UniqueName: \"kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.820660 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.821055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.821203 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.841462 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffr7\" (UniqueName: \"kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7\") pod \"redhat-marketplace-48hf2\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") " pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:34 crc kubenswrapper[4915]: I0127 20:30:34.892912 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:35 crc kubenswrapper[4915]: I0127 20:30:35.400393 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"] Jan 27 20:30:35 crc kubenswrapper[4915]: W0127 20:30:35.404335 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2278bd7a_81a6_4a74_aa73_28e38a4781c8.slice/crio-a492b18013541d24b2e1d78bda11ec93b7904fe0e2d6b8ae06bc2d1ade06245b WatchSource:0}: Error finding container a492b18013541d24b2e1d78bda11ec93b7904fe0e2d6b8ae06bc2d1ade06245b: Status 404 returned error can't find the container with id a492b18013541d24b2e1d78bda11ec93b7904fe0e2d6b8ae06bc2d1ade06245b Jan 27 20:30:35 crc kubenswrapper[4915]: I0127 20:30:35.579192 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerStarted","Data":"a492b18013541d24b2e1d78bda11ec93b7904fe0e2d6b8ae06bc2d1ade06245b"} Jan 27 20:30:35 crc kubenswrapper[4915]: I0127 20:30:35.581569 4915 generic.go:334] "Generic (PLEG): container finished" podID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerID="b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c" exitCode=0 Jan 27 20:30:35 crc kubenswrapper[4915]: I0127 20:30:35.581664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" 
event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerDied","Data":"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"} Jan 27 20:30:36 crc kubenswrapper[4915]: I0127 20:30:36.593306 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerStarted","Data":"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"} Jan 27 20:30:36 crc kubenswrapper[4915]: I0127 20:30:36.595191 4915 generic.go:334] "Generic (PLEG): container finished" podID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerID="a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed" exitCode=0 Jan 27 20:30:36 crc kubenswrapper[4915]: I0127 20:30:36.595227 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerDied","Data":"a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed"} Jan 27 20:30:36 crc kubenswrapper[4915]: I0127 20:30:36.615085 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sqds7" podStartSLOduration=2.9932111839999997 podStartE2EDuration="5.615069799s" podCreationTimestamp="2026-01-27 20:30:31 +0000 UTC" firstStartedPulling="2026-01-27 20:30:33.562889146 +0000 UTC m=+6524.920742860" lastFinishedPulling="2026-01-27 20:30:36.184747801 +0000 UTC m=+6527.542601475" observedRunningTime="2026-01-27 20:30:36.614608078 +0000 UTC m=+6527.972461742" watchObservedRunningTime="2026-01-27 20:30:36.615069799 +0000 UTC m=+6527.972923463" Jan 27 20:30:37 crc kubenswrapper[4915]: I0127 20:30:37.605138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" 
event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerStarted","Data":"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"} Jan 27 20:30:38 crc kubenswrapper[4915]: I0127 20:30:38.615349 4915 generic.go:334] "Generic (PLEG): container finished" podID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerID="fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b" exitCode=0 Jan 27 20:30:38 crc kubenswrapper[4915]: I0127 20:30:38.615673 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerDied","Data":"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"} Jan 27 20:30:39 crc kubenswrapper[4915]: I0127 20:30:39.625419 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerStarted","Data":"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"} Jan 27 20:30:39 crc kubenswrapper[4915]: I0127 20:30:39.645551 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48hf2" podStartSLOduration=3.203413018 podStartE2EDuration="5.645528881s" podCreationTimestamp="2026-01-27 20:30:34 +0000 UTC" firstStartedPulling="2026-01-27 20:30:36.596842953 +0000 UTC m=+6527.954696617" lastFinishedPulling="2026-01-27 20:30:39.038958796 +0000 UTC m=+6530.396812480" observedRunningTime="2026-01-27 20:30:39.642431665 +0000 UTC m=+6531.000285339" watchObservedRunningTime="2026-01-27 20:30:39.645528881 +0000 UTC m=+6531.003382545" Jan 27 20:30:41 crc kubenswrapper[4915]: I0127 20:30:41.358174 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:30:41 crc kubenswrapper[4915]: E0127 20:30:41.358972 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:30:42 crc kubenswrapper[4915]: I0127 20:30:42.375366 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:42 crc kubenswrapper[4915]: I0127 20:30:42.375429 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:42 crc kubenswrapper[4915]: I0127 20:30:42.424846 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:42 crc kubenswrapper[4915]: I0127 20:30:42.707121 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sqds7" Jan 27 20:30:43 crc kubenswrapper[4915]: I0127 20:30:43.371874 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqds7"] Jan 27 20:30:44 crc kubenswrapper[4915]: I0127 20:30:44.670973 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sqds7" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="registry-server" containerID="cri-o://9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c" gracePeriod=2 Jan 27 20:30:44 crc kubenswrapper[4915]: I0127 20:30:44.893034 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48hf2" Jan 27 20:30:44 crc kubenswrapper[4915]: I0127 20:30:44.893411 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-48hf2"
Jan 27 20:30:44 crc kubenswrapper[4915]: I0127 20:30:44.981920 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48hf2"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.189928 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqds7"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.354410 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities\") pod \"095426f8-8107-4fa0-b1d6-a4107621e0ae\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") "
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.354558 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v2kg\" (UniqueName: \"kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg\") pod \"095426f8-8107-4fa0-b1d6-a4107621e0ae\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") "
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.354756 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content\") pod \"095426f8-8107-4fa0-b1d6-a4107621e0ae\" (UID: \"095426f8-8107-4fa0-b1d6-a4107621e0ae\") "
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.355430 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities" (OuterVolumeSpecName: "utilities") pod "095426f8-8107-4fa0-b1d6-a4107621e0ae" (UID: "095426f8-8107-4fa0-b1d6-a4107621e0ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.361970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg" (OuterVolumeSpecName: "kube-api-access-8v2kg") pod "095426f8-8107-4fa0-b1d6-a4107621e0ae" (UID: "095426f8-8107-4fa0-b1d6-a4107621e0ae"). InnerVolumeSpecName "kube-api-access-8v2kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.402335 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "095426f8-8107-4fa0-b1d6-a4107621e0ae" (UID: "095426f8-8107-4fa0-b1d6-a4107621e0ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.457330 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.457364 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/095426f8-8107-4fa0-b1d6-a4107621e0ae-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.457378 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v2kg\" (UniqueName: \"kubernetes.io/projected/095426f8-8107-4fa0-b1d6-a4107621e0ae-kube-api-access-8v2kg\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.686293 4915 generic.go:334] "Generic (PLEG): container finished" podID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerID="9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c" exitCode=0
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.686347 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqds7"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.686379 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerDied","Data":"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"}
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.687528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqds7" event={"ID":"095426f8-8107-4fa0-b1d6-a4107621e0ae","Type":"ContainerDied","Data":"0f767c454fc47a9cb0771afd9976941c5c7fe042798b985965e6491e34080f5e"}
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.687570 4915 scope.go:117] "RemoveContainer" containerID="9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.717001 4915 scope.go:117] "RemoveContainer" containerID="b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.759749 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqds7"]
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.761044 4915 scope.go:117] "RemoveContainer" containerID="4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.766833 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-48hf2"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.772165 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sqds7"]
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.816543 4915 scope.go:117] "RemoveContainer" containerID="9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"
Jan 27 20:30:45 crc kubenswrapper[4915]: E0127 20:30:45.817021 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c\": container with ID starting with 9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c not found: ID does not exist" containerID="9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.817081 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c"} err="failed to get container status \"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c\": rpc error: code = NotFound desc = could not find container \"9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c\": container with ID starting with 9e5e0fd5575874836faec12006c962012b1fde783b8696f69f18b70ca734368c not found: ID does not exist"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.817112 4915 scope.go:117] "RemoveContainer" containerID="b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"
Jan 27 20:30:45 crc kubenswrapper[4915]: E0127 20:30:45.817609 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c\": container with ID starting with b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c not found: ID does not exist" containerID="b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.817635 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c"} err="failed to get container status \"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c\": rpc error: code = NotFound desc = could not find container \"b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c\": container with ID starting with b970c5275f5f5730815d63dd61fa3eaec5baf20c28672ac0ee7d9ad2f0efde9c not found: ID does not exist"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.817654 4915 scope.go:117] "RemoveContainer" containerID="4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f"
Jan 27 20:30:45 crc kubenswrapper[4915]: E0127 20:30:45.817883 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f\": container with ID starting with 4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f not found: ID does not exist" containerID="4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f"
Jan 27 20:30:45 crc kubenswrapper[4915]: I0127 20:30:45.817909 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f"} err="failed to get container status \"4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f\": rpc error: code = NotFound desc = could not find container \"4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f\": container with ID starting with 4d0a8afdb78340fc08cd676788ced54531572b25d4be9681cb67858e3cd9ca6f not found: ID does not exist"
Jan 27 20:30:47 crc kubenswrapper[4915]: I0127 20:30:47.370407 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" path="/var/lib/kubelet/pods/095426f8-8107-4fa0-b1d6-a4107621e0ae/volumes"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.157963 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"]
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.158469 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48hf2" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="registry-server" containerID="cri-o://f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43" gracePeriod=2
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.590389 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48hf2"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.630883 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffr7\" (UniqueName: \"kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7\") pod \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") "
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.631379 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content\") pod \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") "
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.631439 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities\") pod \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\" (UID: \"2278bd7a-81a6-4a74-aa73-28e38a4781c8\") "
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.632583 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities" (OuterVolumeSpecName: "utilities") pod "2278bd7a-81a6-4a74-aa73-28e38a4781c8" (UID: "2278bd7a-81a6-4a74-aa73-28e38a4781c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.637002 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7" (OuterVolumeSpecName: "kube-api-access-9ffr7") pod "2278bd7a-81a6-4a74-aa73-28e38a4781c8" (UID: "2278bd7a-81a6-4a74-aa73-28e38a4781c8"). InnerVolumeSpecName "kube-api-access-9ffr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.657559 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2278bd7a-81a6-4a74-aa73-28e38a4781c8" (UID: "2278bd7a-81a6-4a74-aa73-28e38a4781c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.719703 4915 generic.go:334] "Generic (PLEG): container finished" podID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerID="f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43" exitCode=0
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.719742 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerDied","Data":"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"}
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.719754 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48hf2"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.719771 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48hf2" event={"ID":"2278bd7a-81a6-4a74-aa73-28e38a4781c8","Type":"ContainerDied","Data":"a492b18013541d24b2e1d78bda11ec93b7904fe0e2d6b8ae06bc2d1ade06245b"}
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.719795 4915 scope.go:117] "RemoveContainer" containerID="f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.733924 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffr7\" (UniqueName: \"kubernetes.io/projected/2278bd7a-81a6-4a74-aa73-28e38a4781c8-kube-api-access-9ffr7\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.733989 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.734016 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2278bd7a-81a6-4a74-aa73-28e38a4781c8-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.752569 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"]
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.758153 4915 scope.go:117] "RemoveContainer" containerID="fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.760312 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48hf2"]
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.776513 4915 scope.go:117] "RemoveContainer" containerID="a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.813951 4915 scope.go:117] "RemoveContainer" containerID="f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"
Jan 27 20:30:48 crc kubenswrapper[4915]: E0127 20:30:48.814380 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43\": container with ID starting with f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43 not found: ID does not exist" containerID="f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.814414 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43"} err="failed to get container status \"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43\": rpc error: code = NotFound desc = could not find container \"f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43\": container with ID starting with f5b5b14a049b2300f0632a8ceee551cc43dc41b02a35bd1af5ee60a332fb5e43 not found: ID does not exist"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.814435 4915 scope.go:117] "RemoveContainer" containerID="fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"
Jan 27 20:30:48 crc kubenswrapper[4915]: E0127 20:30:48.814774 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b\": container with ID starting with fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b not found: ID does not exist" containerID="fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.814810 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b"} err="failed to get container status \"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b\": rpc error: code = NotFound desc = could not find container \"fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b\": container with ID starting with fc2b0b39677feaafe1316c6915ae2d80aef90b316fc05c937db49686869c652b not found: ID does not exist"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.814835 4915 scope.go:117] "RemoveContainer" containerID="a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed"
Jan 27 20:30:48 crc kubenswrapper[4915]: E0127 20:30:48.815057 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed\": container with ID starting with a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed not found: ID does not exist" containerID="a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed"
Jan 27 20:30:48 crc kubenswrapper[4915]: I0127 20:30:48.815103 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed"} err="failed to get container status \"a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed\": rpc error: code = NotFound desc = could not find container \"a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed\": container with ID starting with a9529810122d5edbc0aeed8076d9f68844c19b6dcf36ad4303a6b5351790ffed not found: ID does not exist"
Jan 27 20:30:49 crc kubenswrapper[4915]: I0127 20:30:49.374741 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" path="/var/lib/kubelet/pods/2278bd7a-81a6-4a74-aa73-28e38a4781c8/volumes"
Jan 27 20:30:55 crc kubenswrapper[4915]: I0127 20:30:55.357629 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:30:55 crc kubenswrapper[4915]: E0127 20:30:55.358495 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:30:59 crc kubenswrapper[4915]: I0127 20:30:59.462972 4915 scope.go:117] "RemoveContainer" containerID="077aec338595ada7c31a2a9397417f7765784b2b2d0e6940e6145f228fd7fbc9"
Jan 27 20:31:06 crc kubenswrapper[4915]: I0127 20:31:06.358320 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:31:06 crc kubenswrapper[4915]: E0127 20:31:06.359076 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:31:17 crc kubenswrapper[4915]: I0127 20:31:17.358040 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:31:17 crc kubenswrapper[4915]: E0127 20:31:17.358952 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:31:29 crc kubenswrapper[4915]: I0127 20:31:29.364884 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:31:29 crc kubenswrapper[4915]: E0127 20:31:29.365602 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:31:40 crc kubenswrapper[4915]: I0127 20:31:40.358017 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:31:40 crc kubenswrapper[4915]: E0127 20:31:40.358984 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:31:51 crc kubenswrapper[4915]: I0127 20:31:51.358592 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:31:51 crc kubenswrapper[4915]: E0127 20:31:51.359296 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:32:04 crc kubenswrapper[4915]: I0127 20:32:04.358285 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:32:04 crc kubenswrapper[4915]: E0127 20:32:04.359161 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:32:16 crc kubenswrapper[4915]: I0127 20:32:16.358487 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:32:16 crc kubenswrapper[4915]: E0127 20:32:16.359258 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:32:31 crc kubenswrapper[4915]: I0127 20:32:31.358279 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:32:31 crc kubenswrapper[4915]: E0127 20:32:31.359192 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:32:43 crc kubenswrapper[4915]: I0127 20:32:43.357762 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:32:43 crc kubenswrapper[4915]: E0127 20:32:43.358738 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:32:56 crc kubenswrapper[4915]: I0127 20:32:56.358149 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:32:56 crc kubenswrapper[4915]: E0127 20:32:56.359075 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:33:07 crc kubenswrapper[4915]: I0127 20:33:07.358614 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:33:07 crc kubenswrapper[4915]: E0127 20:33:07.359789 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.151265 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"]
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152664 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="extract-content"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152691 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="extract-content"
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152718 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152729 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152752 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="extract-utilities"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152765 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="extract-utilities"
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152784 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="extract-utilities"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152819 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="extract-utilities"
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152843 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152853 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: E0127 20:33:16.152886 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="extract-content"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.152897 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="extract-content"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.153218 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="095426f8-8107-4fa0-b1d6-a4107621e0ae" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.153251 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2278bd7a-81a6-4a74-aa73-28e38a4781c8" containerName="registry-server"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.155600 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.170202 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"]
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.332038 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.333396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsvxq\" (UniqueName: \"kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.333530 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.435285 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.435878 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.436134 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsvxq\" (UniqueName: \"kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.436517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.436861 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.464046 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsvxq\" (UniqueName: \"kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq\") pod \"redhat-operators-qbq97\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.499137 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:16 crc kubenswrapper[4915]: I0127 20:33:16.938566 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"]
Jan 27 20:33:17 crc kubenswrapper[4915]: I0127 20:33:17.205292 4915 generic.go:334] "Generic (PLEG): container finished" podID="af1c3774-9208-478b-b3c1-edd8cc407169" containerID="d42a111a5a2462410bd31c47b2101e387b96e544a01d5d46761425356a2e616b" exitCode=0
Jan 27 20:33:17 crc kubenswrapper[4915]: I0127 20:33:17.205342 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerDied","Data":"d42a111a5a2462410bd31c47b2101e387b96e544a01d5d46761425356a2e616b"}
Jan 27 20:33:17 crc kubenswrapper[4915]: I0127 20:33:17.205399 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerStarted","Data":"5626015c19ed48d9f96b86d833269d4fc33f9381d96e8287e5b2d3d64fa5c504"}
Jan 27 20:33:19 crc kubenswrapper[4915]: I0127 20:33:19.231054 4915 generic.go:334] "Generic (PLEG): container finished" podID="af1c3774-9208-478b-b3c1-edd8cc407169" containerID="746f589283da1cf7ae7edfcff1a4519c51f63520bdb3f8b0da754e2fa446cf44" exitCode=0
Jan 27 20:33:19 crc kubenswrapper[4915]: I0127 20:33:19.231146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerDied","Data":"746f589283da1cf7ae7edfcff1a4519c51f63520bdb3f8b0da754e2fa446cf44"}
Jan 27 20:33:20 crc kubenswrapper[4915]: I0127 20:33:20.243877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerStarted","Data":"ae573ec3e75dc7bdb28666025fee64f10f8d327240496eb013fa4ba4014eb99f"}
Jan 27 20:33:20 crc kubenswrapper[4915]: I0127 20:33:20.277997 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qbq97" podStartSLOduration=1.871127452 podStartE2EDuration="4.277979121s" podCreationTimestamp="2026-01-27 20:33:16 +0000 UTC" firstStartedPulling="2026-01-27 20:33:17.206976207 +0000 UTC m=+6688.564829871" lastFinishedPulling="2026-01-27 20:33:19.613827866 +0000 UTC m=+6690.971681540" observedRunningTime="2026-01-27 20:33:20.275711195 +0000 UTC m=+6691.633564909" watchObservedRunningTime="2026-01-27 20:33:20.277979121 +0000 UTC m=+6691.635832795"
Jan 27 20:33:20 crc kubenswrapper[4915]: I0127 20:33:20.358074 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:33:20 crc kubenswrapper[4915]: E0127 20:33:20.358339 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:33:26 crc kubenswrapper[4915]: I0127 20:33:26.499400 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:26 crc kubenswrapper[4915]: I0127 20:33:26.501012 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:26 crc kubenswrapper[4915]: I0127 20:33:26.594718 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:27 crc kubenswrapper[4915]: I0127 20:33:27.341967 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qbq97"
Jan 27 20:33:27 crc kubenswrapper[4915]: I0127 20:33:27.385989 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"]
Jan 27 20:33:29 crc kubenswrapper[4915]: I0127 20:33:29.322149 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qbq97" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="registry-server" containerID="cri-o://ae573ec3e75dc7bdb28666025fee64f10f8d327240496eb013fa4ba4014eb99f" gracePeriod=2
Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.341204 4915 generic.go:334] "Generic (PLEG): container finished" podID="af1c3774-9208-478b-b3c1-edd8cc407169" containerID="ae573ec3e75dc7bdb28666025fee64f10f8d327240496eb013fa4ba4014eb99f" exitCode=0
Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.341555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerDied","Data":"ae573ec3e75dc7bdb28666025fee64f10f8d327240496eb013fa4ba4014eb99f"}
Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.593384 4915 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbq97" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.759354 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsvxq\" (UniqueName: \"kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq\") pod \"af1c3774-9208-478b-b3c1-edd8cc407169\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.759967 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content\") pod \"af1c3774-9208-478b-b3c1-edd8cc407169\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.760244 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities\") pod \"af1c3774-9208-478b-b3c1-edd8cc407169\" (UID: \"af1c3774-9208-478b-b3c1-edd8cc407169\") " Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.761359 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities" (OuterVolumeSpecName: "utilities") pod "af1c3774-9208-478b-b3c1-edd8cc407169" (UID: "af1c3774-9208-478b-b3c1-edd8cc407169"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.770091 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq" (OuterVolumeSpecName: "kube-api-access-tsvxq") pod "af1c3774-9208-478b-b3c1-edd8cc407169" (UID: "af1c3774-9208-478b-b3c1-edd8cc407169"). InnerVolumeSpecName "kube-api-access-tsvxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.863760 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.863835 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsvxq\" (UniqueName: \"kubernetes.io/projected/af1c3774-9208-478b-b3c1-edd8cc407169-kube-api-access-tsvxq\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.932744 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af1c3774-9208-478b-b3c1-edd8cc407169" (UID: "af1c3774-9208-478b-b3c1-edd8cc407169"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:31 crc kubenswrapper[4915]: I0127 20:33:31.966929 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1c3774-9208-478b-b3c1-edd8cc407169-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.356582 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbq97" event={"ID":"af1c3774-9208-478b-b3c1-edd8cc407169","Type":"ContainerDied","Data":"5626015c19ed48d9f96b86d833269d4fc33f9381d96e8287e5b2d3d64fa5c504"} Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.356652 4915 scope.go:117] "RemoveContainer" containerID="ae573ec3e75dc7bdb28666025fee64f10f8d327240496eb013fa4ba4014eb99f" Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.356824 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbq97" Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.399712 4915 scope.go:117] "RemoveContainer" containerID="746f589283da1cf7ae7edfcff1a4519c51f63520bdb3f8b0da754e2fa446cf44" Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.407473 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"] Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.427345 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qbq97"] Jan 27 20:33:32 crc kubenswrapper[4915]: I0127 20:33:32.436656 4915 scope.go:117] "RemoveContainer" containerID="d42a111a5a2462410bd31c47b2101e387b96e544a01d5d46761425356a2e616b" Jan 27 20:33:33 crc kubenswrapper[4915]: I0127 20:33:33.358635 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:33:33 crc kubenswrapper[4915]: E0127 20:33:33.360261 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:33:33 crc kubenswrapper[4915]: I0127 20:33:33.372529 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" path="/var/lib/kubelet/pods/af1c3774-9208-478b-b3c1-edd8cc407169/volumes" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.032467 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-975c5"] Jan 27 20:33:36 crc kubenswrapper[4915]: E0127 20:33:36.034378 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="registry-server" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.034395 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="registry-server" Jan 27 20:33:36 crc kubenswrapper[4915]: E0127 20:33:36.034412 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="extract-utilities" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.034421 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="extract-utilities" Jan 27 20:33:36 crc kubenswrapper[4915]: E0127 20:33:36.034432 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="extract-content" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.034438 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="extract-content" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.034658 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1c3774-9208-478b-b3c1-edd8cc407169" containerName="registry-server" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.036882 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.043926 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-975c5"] Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.048543 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-utilities\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.048620 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctdqd\" (UniqueName: \"kubernetes.io/projected/81368b67-b485-4b0b-926e-332225f9022c-kube-api-access-ctdqd\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.048920 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-catalog-content\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.150177 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctdqd\" (UniqueName: \"kubernetes.io/projected/81368b67-b485-4b0b-926e-332225f9022c-kube-api-access-ctdqd\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.150306 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-catalog-content\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.150364 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-utilities\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.150805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-utilities\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.151764 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81368b67-b485-4b0b-926e-332225f9022c-catalog-content\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.175530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctdqd\" (UniqueName: \"kubernetes.io/projected/81368b67-b485-4b0b-926e-332225f9022c-kube-api-access-ctdqd\") pod \"certified-operators-975c5\" (UID: \"81368b67-b485-4b0b-926e-332225f9022c\") " pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.359218 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:36 crc kubenswrapper[4915]: I0127 20:33:36.826427 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-975c5"] Jan 27 20:33:37 crc kubenswrapper[4915]: I0127 20:33:37.404373 4915 generic.go:334] "Generic (PLEG): container finished" podID="81368b67-b485-4b0b-926e-332225f9022c" containerID="7fcc3425c52bd168c22a6701273e0aecf26f9a7a22dce8a2d4f1904c1c4a4d2f" exitCode=0 Jan 27 20:33:37 crc kubenswrapper[4915]: I0127 20:33:37.404465 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-975c5" event={"ID":"81368b67-b485-4b0b-926e-332225f9022c","Type":"ContainerDied","Data":"7fcc3425c52bd168c22a6701273e0aecf26f9a7a22dce8a2d4f1904c1c4a4d2f"} Jan 27 20:33:37 crc kubenswrapper[4915]: I0127 20:33:37.404616 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-975c5" event={"ID":"81368b67-b485-4b0b-926e-332225f9022c","Type":"ContainerStarted","Data":"b86817b0d85304cd7b536da2d5ad1cc0bf64f799735afcb226716cee7b2f5f83"} Jan 27 20:33:42 crc kubenswrapper[4915]: I0127 20:33:42.447189 4915 generic.go:334] "Generic (PLEG): container finished" podID="81368b67-b485-4b0b-926e-332225f9022c" containerID="3128f4483b4e02a7ec0fc569ee2109d9ebd8b4b58024b8573e06fb137c2db445" exitCode=0 Jan 27 20:33:42 crc kubenswrapper[4915]: I0127 20:33:42.447338 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-975c5" event={"ID":"81368b67-b485-4b0b-926e-332225f9022c","Type":"ContainerDied","Data":"3128f4483b4e02a7ec0fc569ee2109d9ebd8b4b58024b8573e06fb137c2db445"} Jan 27 20:33:43 crc kubenswrapper[4915]: I0127 20:33:43.458783 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-975c5" 
event={"ID":"81368b67-b485-4b0b-926e-332225f9022c","Type":"ContainerStarted","Data":"ced7ad80eaa1ede1b37fd59407d754caf4d48ff06aed8e0f7bb6f600e698a70a"} Jan 27 20:33:43 crc kubenswrapper[4915]: I0127 20:33:43.487126 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-975c5" podStartSLOduration=1.9654024890000001 podStartE2EDuration="7.487101638s" podCreationTimestamp="2026-01-27 20:33:36 +0000 UTC" firstStartedPulling="2026-01-27 20:33:37.40721194 +0000 UTC m=+6708.765065614" lastFinishedPulling="2026-01-27 20:33:42.928911059 +0000 UTC m=+6714.286764763" observedRunningTime="2026-01-27 20:33:43.4847321 +0000 UTC m=+6714.842585764" watchObservedRunningTime="2026-01-27 20:33:43.487101638 +0000 UTC m=+6714.844955312" Jan 27 20:33:46 crc kubenswrapper[4915]: I0127 20:33:46.360200 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:46 crc kubenswrapper[4915]: I0127 20:33:46.360497 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:46 crc kubenswrapper[4915]: I0127 20:33:46.417198 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:47 crc kubenswrapper[4915]: I0127 20:33:47.357966 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:33:47 crc kubenswrapper[4915]: E0127 20:33:47.358593 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:33:56 crc kubenswrapper[4915]: I0127 20:33:56.438934 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-975c5" Jan 27 20:33:56 crc kubenswrapper[4915]: I0127 20:33:56.524647 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-975c5"] Jan 27 20:33:56 crc kubenswrapper[4915]: I0127 20:33:56.573613 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"] Jan 27 20:33:56 crc kubenswrapper[4915]: I0127 20:33:56.573829 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6dnm" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="registry-server" containerID="cri-o://ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5" gracePeriod=2 Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.055135 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.241720 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2n25\" (UniqueName: \"kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25\") pod \"b018046c-4f05-4969-ae24-3017ac28f3f7\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.241830 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities\") pod \"b018046c-4f05-4969-ae24-3017ac28f3f7\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.241873 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content\") pod \"b018046c-4f05-4969-ae24-3017ac28f3f7\" (UID: \"b018046c-4f05-4969-ae24-3017ac28f3f7\") " Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.244949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities" (OuterVolumeSpecName: "utilities") pod "b018046c-4f05-4969-ae24-3017ac28f3f7" (UID: "b018046c-4f05-4969-ae24-3017ac28f3f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.253116 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25" (OuterVolumeSpecName: "kube-api-access-w2n25") pod "b018046c-4f05-4969-ae24-3017ac28f3f7" (UID: "b018046c-4f05-4969-ae24-3017ac28f3f7"). InnerVolumeSpecName "kube-api-access-w2n25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.305735 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b018046c-4f05-4969-ae24-3017ac28f3f7" (UID: "b018046c-4f05-4969-ae24-3017ac28f3f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.350336 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2n25\" (UniqueName: \"kubernetes.io/projected/b018046c-4f05-4969-ae24-3017ac28f3f7-kube-api-access-w2n25\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.350818 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.350833 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b018046c-4f05-4969-ae24-3017ac28f3f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.578518 4915 generic.go:334] "Generic (PLEG): container finished" podID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerID="ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5" exitCode=0 Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.578571 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6dnm" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.578581 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerDied","Data":"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5"} Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.578694 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6dnm" event={"ID":"b018046c-4f05-4969-ae24-3017ac28f3f7","Type":"ContainerDied","Data":"9706ac2cd0961ae5ee755025d0174311c44410845ea8400264652d3c335b8b07"} Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.578723 4915 scope.go:117] "RemoveContainer" containerID="ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.600215 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"] Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.606603 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6dnm"] Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.608614 4915 scope.go:117] "RemoveContainer" containerID="7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.632739 4915 scope.go:117] "RemoveContainer" containerID="b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.668350 4915 scope.go:117] "RemoveContainer" containerID="ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5" Jan 27 20:33:57 crc kubenswrapper[4915]: E0127 20:33:57.669077 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5\": container with ID starting with ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5 not found: ID does not exist" containerID="ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.669120 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5"} err="failed to get container status \"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5\": rpc error: code = NotFound desc = could not find container \"ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5\": container with ID starting with ec0a2741de0339bc6c4484dad325b0901422a32767965c4efa458aa95ffbeba5 not found: ID does not exist" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.669152 4915 scope.go:117] "RemoveContainer" containerID="7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140" Jan 27 20:33:57 crc kubenswrapper[4915]: E0127 20:33:57.669637 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140\": container with ID starting with 7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140 not found: ID does not exist" containerID="7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.669665 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140"} err="failed to get container status \"7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140\": rpc error: code = NotFound desc = could not find container \"7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140\": container with ID 
starting with 7c6839705d15db5e46f3cf46de1fc7476c5b17f2bc90eef13e7774d2df5c5140 not found: ID does not exist" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.669685 4915 scope.go:117] "RemoveContainer" containerID="b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa" Jan 27 20:33:57 crc kubenswrapper[4915]: E0127 20:33:57.669903 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa\": container with ID starting with b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa not found: ID does not exist" containerID="b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa" Jan 27 20:33:57 crc kubenswrapper[4915]: I0127 20:33:57.669922 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa"} err="failed to get container status \"b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa\": rpc error: code = NotFound desc = could not find container \"b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa\": container with ID starting with b4976f61b3b27d4aeffe0aa433c361eb8d31b86b5f528722817a4e8c89d719aa not found: ID does not exist" Jan 27 20:33:58 crc kubenswrapper[4915]: I0127 20:33:58.357670 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:33:58 crc kubenswrapper[4915]: E0127 20:33:58.358078 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:33:59 crc kubenswrapper[4915]: I0127 20:33:59.372443 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" path="/var/lib/kubelet/pods/b018046c-4f05-4969-ae24-3017ac28f3f7/volumes" Jan 27 20:34:09 crc kubenswrapper[4915]: I0127 20:34:09.364889 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:34:09 crc kubenswrapper[4915]: E0127 20:34:09.365840 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:34:20 crc kubenswrapper[4915]: I0127 20:34:20.357973 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:34:20 crc kubenswrapper[4915]: E0127 20:34:20.358773 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:34:31 crc kubenswrapper[4915]: I0127 20:34:31.358027 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619" Jan 27 20:34:31 crc kubenswrapper[4915]: I0127 20:34:31.905828 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd"}
Jan 27 20:36:50 crc kubenswrapper[4915]: I0127 20:36:50.625289 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:36:50 crc kubenswrapper[4915]: I0127 20:36:50.625980 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:37:20 crc kubenswrapper[4915]: I0127 20:37:20.624662 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:37:20 crc kubenswrapper[4915]: I0127 20:37:20.626124 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:37:50 crc kubenswrapper[4915]: I0127 20:37:50.625346 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:37:50 crc kubenswrapper[4915]: I0127 20:37:50.626350 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:37:50 crc kubenswrapper[4915]: I0127 20:37:50.626423 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:37:50 crc kubenswrapper[4915]: I0127 20:37:50.627692 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:37:50 crc kubenswrapper[4915]: I0127 20:37:50.627767 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd" gracePeriod=600
Jan 27 20:37:51 crc kubenswrapper[4915]: I0127 20:37:51.055966 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd" exitCode=0
Jan 27 20:37:51 crc kubenswrapper[4915]: I0127 20:37:51.056055 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd"}
Jan 27 20:37:51 crc kubenswrapper[4915]: I0127 20:37:51.056588 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"}
Jan 27 20:37:51 crc kubenswrapper[4915]: I0127 20:37:51.056626 4915 scope.go:117] "RemoveContainer" containerID="f1ea29bc183ea4452bd8d581ccbb49ad3274143fccd98f8f9fdd6f8e47b53619"
Jan 27 20:39:50 crc kubenswrapper[4915]: I0127 20:39:50.624985 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:39:50 crc kubenswrapper[4915]: I0127 20:39:50.625845 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:40:20 crc kubenswrapper[4915]: I0127 20:40:20.624270 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:40:20 crc kubenswrapper[4915]: I0127 20:40:20.624900 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.684156 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:40 crc kubenswrapper[4915]: E0127 20:40:40.685296 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="extract-content"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.685311 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="extract-content"
Jan 27 20:40:40 crc kubenswrapper[4915]: E0127 20:40:40.685337 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="registry-server"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.685344 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="registry-server"
Jan 27 20:40:40 crc kubenswrapper[4915]: E0127 20:40:40.685372 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="extract-utilities"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.685379 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="extract-utilities"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.685569 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b018046c-4f05-4969-ae24-3017ac28f3f7" containerName="registry-server"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.687035 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.696597 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.779658 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhtb\" (UniqueName: \"kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.780154 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.780202 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.882400 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhtb\" (UniqueName: \"kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.882539 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.882571 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.883241 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.883433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:40 crc kubenswrapper[4915]: I0127 20:40:40.900549 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhtb\" (UniqueName: \"kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb\") pod \"redhat-marketplace-78whm\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") " pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:41 crc kubenswrapper[4915]: I0127 20:40:41.016964 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:41 crc kubenswrapper[4915]: I0127 20:40:41.512177 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:42 crc kubenswrapper[4915]: I0127 20:40:42.112051 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerID="1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6" exitCode=0
Jan 27 20:40:42 crc kubenswrapper[4915]: I0127 20:40:42.112339 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerDied","Data":"1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6"}
Jan 27 20:40:42 crc kubenswrapper[4915]: I0127 20:40:42.112366 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerStarted","Data":"f6df787680a5936395bdf3729563bfe7e2ce512841c99e85b4939310ca736222"}
Jan 27 20:40:42 crc kubenswrapper[4915]: I0127 20:40:42.114270 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 20:40:43 crc kubenswrapper[4915]: I0127 20:40:43.123913 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerID="454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748" exitCode=0
Jan 27 20:40:43 crc kubenswrapper[4915]: I0127 20:40:43.124094 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerDied","Data":"454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748"}
Jan 27 20:40:44 crc kubenswrapper[4915]: I0127 20:40:44.135178 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerStarted","Data":"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"}
Jan 27 20:40:44 crc kubenswrapper[4915]: I0127 20:40:44.152590 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78whm" podStartSLOduration=2.673303996 podStartE2EDuration="4.15256956s" podCreationTimestamp="2026-01-27 20:40:40 +0000 UTC" firstStartedPulling="2026-01-27 20:40:42.113993659 +0000 UTC m=+7133.471847323" lastFinishedPulling="2026-01-27 20:40:43.593259213 +0000 UTC m=+7134.951112887" observedRunningTime="2026-01-27 20:40:44.151306359 +0000 UTC m=+7135.509160043" watchObservedRunningTime="2026-01-27 20:40:44.15256956 +0000 UTC m=+7135.510423224"
Jan 27 20:40:50 crc kubenswrapper[4915]: I0127 20:40:50.624624 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:40:50 crc kubenswrapper[4915]: I0127 20:40:50.625239 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:40:50 crc kubenswrapper[4915]: I0127 20:40:50.625296 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 20:40:50 crc kubenswrapper[4915]: I0127 20:40:50.626147 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:40:50 crc kubenswrapper[4915]: I0127 20:40:50.626231 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" gracePeriod=600
Jan 27 20:40:50 crc kubenswrapper[4915]: E0127 20:40:50.755343 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.017638 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.017690 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.064842 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.195598 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" exitCode=0
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.196508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"}
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.196546 4915 scope.go:117] "RemoveContainer" containerID="cadcb103a1f11210dad31465c68e57e5873d4cf4dfce5be9927759a38963f7fd"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.196856 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:40:51 crc kubenswrapper[4915]: E0127 20:40:51.197064 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.248221 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:51 crc kubenswrapper[4915]: I0127 20:40:51.309939 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.216922 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-78whm" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="registry-server" containerID="cri-o://4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3" gracePeriod=2
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.723464 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.859830 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content\") pod \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") "
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.860257 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffhtb\" (UniqueName: \"kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb\") pod \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") "
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.860391 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities\") pod \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\" (UID: \"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40\") "
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.862045 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities" (OuterVolumeSpecName: "utilities") pod "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" (UID: "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.865950 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb" (OuterVolumeSpecName: "kube-api-access-ffhtb") pod "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" (UID: "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40"). InnerVolumeSpecName "kube-api-access-ffhtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.881932 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" (UID: "ee3f256e-bc7c-40ce-8e88-5f886c0e1a40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.963358 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.963409 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:40:53 crc kubenswrapper[4915]: I0127 20:40:53.963433 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffhtb\" (UniqueName: \"kubernetes.io/projected/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40-kube-api-access-ffhtb\") on node \"crc\" DevicePath \"\""
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.226532 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerID="4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3" exitCode=0
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.226593 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerDied","Data":"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"}
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.226625 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78whm" event={"ID":"ee3f256e-bc7c-40ce-8e88-5f886c0e1a40","Type":"ContainerDied","Data":"f6df787680a5936395bdf3729563bfe7e2ce512841c99e85b4939310ca736222"}
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.226645 4915 scope.go:117] "RemoveContainer" containerID="4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.226666 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78whm"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.271046 4915 scope.go:117] "RemoveContainer" containerID="454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.278730 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.288559 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-78whm"]
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.321967 4915 scope.go:117] "RemoveContainer" containerID="1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.355779 4915 scope.go:117] "RemoveContainer" containerID="4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"
Jan 27 20:40:54 crc kubenswrapper[4915]: E0127 20:40:54.356182 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3\": container with ID starting with 4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3 not found: ID does not exist" containerID="4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.356232 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3"} err="failed to get container status \"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3\": rpc error: code = NotFound desc = could not find container \"4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3\": container with ID starting with 4c50d5848ef14cb2ccbd58623536e0c3aa139c7a8e350e2f4263a284a12009f3 not found: ID does not exist"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.356255 4915 scope.go:117] "RemoveContainer" containerID="454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748"
Jan 27 20:40:54 crc kubenswrapper[4915]: E0127 20:40:54.356556 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748\": container with ID starting with 454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748 not found: ID does not exist" containerID="454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.356616 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748"} err="failed to get container status \"454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748\": rpc error: code = NotFound desc = could not find container \"454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748\": container with ID starting with 454d187eb830d8f4c443c86d39038ca09d863858c1bc43074ac11bd1ad1fc748 not found: ID does not exist"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.356663 4915 scope.go:117] "RemoveContainer" containerID="1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6"
Jan 27 20:40:54 crc kubenswrapper[4915]: E0127 20:40:54.357041 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6\": container with ID starting with 1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6 not found: ID does not exist" containerID="1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6"
Jan 27 20:40:54 crc kubenswrapper[4915]: I0127 20:40:54.357067 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6"} err="failed to get container status \"1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6\": rpc error: code = NotFound desc = could not find container \"1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6\": container with ID starting with 1643e94587daf746dc8d8976f7fb3e354435f3e9e8ce8fc7cf4f7e2376d21ee6 not found: ID does not exist"
Jan 27 20:40:55 crc kubenswrapper[4915]: I0127 20:40:55.369624 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" path="/var/lib/kubelet/pods/ee3f256e-bc7c-40ce-8e88-5f886c0e1a40/volumes"
Jan 27 20:41:06 crc kubenswrapper[4915]: I0127 20:41:06.359150 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:41:06 crc kubenswrapper[4915]: E0127 20:41:06.360489 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:41:18 crc kubenswrapper[4915]: I0127 20:41:18.358104 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:41:18 crc kubenswrapper[4915]: E0127 20:41:18.358897 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.421030 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ldtq"]
Jan 27 20:41:24 crc kubenswrapper[4915]: E0127 20:41:24.422255 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="extract-utilities"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.422279 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="extract-utilities"
Jan 27 20:41:24 crc kubenswrapper[4915]: E0127 20:41:24.422319 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="extract-content"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.422330 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="extract-content"
Jan 27 20:41:24 crc kubenswrapper[4915]: E0127 20:41:24.422349 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="registry-server"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.422359 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="registry-server"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.422665 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3f256e-bc7c-40ce-8e88-5f886c0e1a40" containerName="registry-server"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.424964 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.450554 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ldtq"]
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.523254 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.523436 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdbs\" (UniqueName: \"kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.523496 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.625898 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.626019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdbs\" (UniqueName: \"kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.626065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.626616 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.627017 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.650004 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdbs\" (UniqueName: \"kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs\") pod \"community-operators-9ldtq\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:24 crc kubenswrapper[4915]: I0127 20:41:24.750517 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:25 crc kubenswrapper[4915]: I0127 20:41:25.329196 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ldtq"]
Jan 27 20:41:25 crc kubenswrapper[4915]: I0127 20:41:25.532417 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerStarted","Data":"9e75d89556e4c8564e25dae90a7fefd6dd447b3dae5b6aada0f92632ba1ad544"}
Jan 27 20:41:26 crc kubenswrapper[4915]: I0127 20:41:26.542051 4915 generic.go:334] "Generic (PLEG): container finished" podID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerID="6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be" exitCode=0
Jan 27 20:41:26 crc kubenswrapper[4915]: I0127 20:41:26.542124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerDied","Data":"6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be"}
Jan 27 20:41:28 crc kubenswrapper[4915]: I0127 20:41:28.571383 4915 generic.go:334] "Generic (PLEG): container finished" podID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerID="93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05" exitCode=0
Jan 27 20:41:28 crc kubenswrapper[4915]: I0127 20:41:28.571446 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerDied","Data":"93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05"}
Jan 27 20:41:29 crc kubenswrapper[4915]: I0127 20:41:29.583681 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerStarted","Data":"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f"}
Jan 27 20:41:29 crc kubenswrapper[4915]: I0127 20:41:29.616117 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ldtq" podStartSLOduration=3.218450158 podStartE2EDuration="5.6160981s" podCreationTimestamp="2026-01-27 20:41:24 +0000 UTC" firstStartedPulling="2026-01-27 20:41:26.544675746 +0000 UTC m=+7177.902529420" lastFinishedPulling="2026-01-27 20:41:28.942323698 +0000 UTC m=+7180.300177362" observedRunningTime="2026-01-27 20:41:29.612247285 +0000 UTC m=+7180.970101009" watchObservedRunningTime="2026-01-27 20:41:29.6160981 +0000 UTC m=+7180.973951764"
Jan 27 20:41:30 crc kubenswrapper[4915]: I0127 20:41:30.357350 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:41:30 crc kubenswrapper[4915]: E0127 20:41:30.357954 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:41:34 crc kubenswrapper[4915]: I0127 20:41:34.751593 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ldtq"
Jan 27 20:41:34 crc
kubenswrapper[4915]: I0127 20:41:34.752195 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ldtq" Jan 27 20:41:34 crc kubenswrapper[4915]: I0127 20:41:34.812723 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ldtq" Jan 27 20:41:35 crc kubenswrapper[4915]: I0127 20:41:35.725261 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ldtq" Jan 27 20:41:35 crc kubenswrapper[4915]: I0127 20:41:35.789074 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ldtq"] Jan 27 20:41:37 crc kubenswrapper[4915]: I0127 20:41:37.664827 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ldtq" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="registry-server" containerID="cri-o://346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f" gracePeriod=2 Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.127573 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ldtq" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.210168 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdbs\" (UniqueName: \"kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs\") pod \"e2f7962e-1a50-4613-900c-98b2b9dc6149\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.210307 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities\") pod \"e2f7962e-1a50-4613-900c-98b2b9dc6149\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.210459 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content\") pod \"e2f7962e-1a50-4613-900c-98b2b9dc6149\" (UID: \"e2f7962e-1a50-4613-900c-98b2b9dc6149\") " Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.211349 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities" (OuterVolumeSpecName: "utilities") pod "e2f7962e-1a50-4613-900c-98b2b9dc6149" (UID: "e2f7962e-1a50-4613-900c-98b2b9dc6149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.216454 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs" (OuterVolumeSpecName: "kube-api-access-2rdbs") pod "e2f7962e-1a50-4613-900c-98b2b9dc6149" (UID: "e2f7962e-1a50-4613-900c-98b2b9dc6149"). InnerVolumeSpecName "kube-api-access-2rdbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.281347 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2f7962e-1a50-4613-900c-98b2b9dc6149" (UID: "e2f7962e-1a50-4613-900c-98b2b9dc6149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.312831 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdbs\" (UniqueName: \"kubernetes.io/projected/e2f7962e-1a50-4613-900c-98b2b9dc6149-kube-api-access-2rdbs\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.312874 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.312886 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f7962e-1a50-4613-900c-98b2b9dc6149-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.689389 4915 generic.go:334] "Generic (PLEG): container finished" podID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerID="346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f" exitCode=0 Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.689512 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerDied","Data":"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f"} Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.689552 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9ldtq" event={"ID":"e2f7962e-1a50-4613-900c-98b2b9dc6149","Type":"ContainerDied","Data":"9e75d89556e4c8564e25dae90a7fefd6dd447b3dae5b6aada0f92632ba1ad544"} Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.689578 4915 scope.go:117] "RemoveContainer" containerID="346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.689586 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ldtq" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.717062 4915 scope.go:117] "RemoveContainer" containerID="93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.737416 4915 scope.go:117] "RemoveContainer" containerID="6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.794454 4915 scope.go:117] "RemoveContainer" containerID="346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f" Jan 27 20:41:38 crc kubenswrapper[4915]: E0127 20:41:38.795061 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f\": container with ID starting with 346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f not found: ID does not exist" containerID="346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.795117 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f"} err="failed to get container status \"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f\": rpc error: code = NotFound desc = could not find container 
\"346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f\": container with ID starting with 346dd8ebd3ef1341aad0b651a39d60d623fe9d934d50836c33abd9137c8f141f not found: ID does not exist" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.795147 4915 scope.go:117] "RemoveContainer" containerID="93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05" Jan 27 20:41:38 crc kubenswrapper[4915]: E0127 20:41:38.795750 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05\": container with ID starting with 93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05 not found: ID does not exist" containerID="93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.796023 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05"} err="failed to get container status \"93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05\": rpc error: code = NotFound desc = could not find container \"93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05\": container with ID starting with 93b9c104e3e169ff7ad1336816c9d333631aa154dc5f7e7f09739062e306ab05 not found: ID does not exist" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.796089 4915 scope.go:117] "RemoveContainer" containerID="6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be" Jan 27 20:41:38 crc kubenswrapper[4915]: E0127 20:41:38.797285 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be\": container with ID starting with 6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be not found: ID does not exist" 
containerID="6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.797328 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be"} err="failed to get container status \"6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be\": rpc error: code = NotFound desc = could not find container \"6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be\": container with ID starting with 6c8c1a6ef56443ea112cdcbc1b0455a73a0f8305df5b7ba6373a559b953f51be not found: ID does not exist" Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.859336 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ldtq"] Jan 27 20:41:38 crc kubenswrapper[4915]: I0127 20:41:38.870092 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ldtq"] Jan 27 20:41:39 crc kubenswrapper[4915]: I0127 20:41:39.378274 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" path="/var/lib/kubelet/pods/e2f7962e-1a50-4613-900c-98b2b9dc6149/volumes" Jan 27 20:41:42 crc kubenswrapper[4915]: I0127 20:41:42.357577 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:41:42 crc kubenswrapper[4915]: E0127 20:41:42.359426 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:41:57 crc kubenswrapper[4915]: I0127 
20:41:57.357844 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:41:57 crc kubenswrapper[4915]: E0127 20:41:57.358633 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:42:08 crc kubenswrapper[4915]: I0127 20:42:08.358812 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:42:08 crc kubenswrapper[4915]: E0127 20:42:08.360338 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:42:21 crc kubenswrapper[4915]: I0127 20:42:21.358054 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:42:21 crc kubenswrapper[4915]: E0127 20:42:21.358725 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:42:34 crc 
kubenswrapper[4915]: I0127 20:42:34.358191 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:42:34 crc kubenswrapper[4915]: E0127 20:42:34.358975 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:42:48 crc kubenswrapper[4915]: I0127 20:42:48.357651 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:42:48 crc kubenswrapper[4915]: E0127 20:42:48.358753 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:43:00 crc kubenswrapper[4915]: I0127 20:43:00.358058 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:43:00 crc kubenswrapper[4915]: E0127 20:43:00.374161 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 
27 20:43:15 crc kubenswrapper[4915]: I0127 20:43:15.357744 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:43:15 crc kubenswrapper[4915]: E0127 20:43:15.358428 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:43:29 crc kubenswrapper[4915]: I0127 20:43:29.366288 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:43:29 crc kubenswrapper[4915]: E0127 20:43:29.367108 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:43:43 crc kubenswrapper[4915]: I0127 20:43:43.357963 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:43:43 crc kubenswrapper[4915]: E0127 20:43:43.358775 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:43:54 crc kubenswrapper[4915]: I0127 20:43:54.357664 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:43:54 crc kubenswrapper[4915]: E0127 20:43:54.358587 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:44:07 crc kubenswrapper[4915]: I0127 20:44:07.358174 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:44:07 crc kubenswrapper[4915]: E0127 20:44:07.359058 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:44:20 crc kubenswrapper[4915]: I0127 20:44:20.357980 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:44:20 crc kubenswrapper[4915]: E0127 20:44:20.359089 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:44:35 crc kubenswrapper[4915]: I0127 20:44:35.357981 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:44:35 crc kubenswrapper[4915]: E0127 20:44:35.358760 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:44:49 crc kubenswrapper[4915]: I0127 20:44:49.363390 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:44:49 crc kubenswrapper[4915]: E0127 20:44:49.364531 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.171718 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"] Jan 27 20:44:51 crc kubenswrapper[4915]: E0127 20:44:51.172607 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="extract-content" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.172626 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" 
containerName="extract-content" Jan 27 20:44:51 crc kubenswrapper[4915]: E0127 20:44:51.172675 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="extract-utilities" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.172686 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="extract-utilities" Jan 27 20:44:51 crc kubenswrapper[4915]: E0127 20:44:51.172709 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="registry-server" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.172718 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="registry-server" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.172986 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f7962e-1a50-4613-900c-98b2b9dc6149" containerName="registry-server" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.174740 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcdbw" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.191162 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"] Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.226068 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.226229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.226333 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wks\" (UniqueName: \"kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.328302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw" Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.328346 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.328391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wks\" (UniqueName: \"kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.329105 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.329519 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.362498 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wks\" (UniqueName: \"kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks\") pod \"certified-operators-xcdbw\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") " pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.504311 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:44:51 crc kubenswrapper[4915]: I0127 20:44:51.959069 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"]
Jan 27 20:44:52 crc kubenswrapper[4915]: I0127 20:44:52.861280 4915 generic.go:334] "Generic (PLEG): container finished" podID="996da7ac-a127-4ea8-9516-67554e40db6a" containerID="079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a" exitCode=0
Jan 27 20:44:52 crc kubenswrapper[4915]: I0127 20:44:52.861357 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerDied","Data":"079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a"}
Jan 27 20:44:52 crc kubenswrapper[4915]: I0127 20:44:52.861605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerStarted","Data":"db87762c4c04ae78e993cf108c8b6759cfd2106604978f6da74cac82ef65f2bc"}
Jan 27 20:44:53 crc kubenswrapper[4915]: I0127 20:44:53.874946 4915 generic.go:334] "Generic (PLEG): container finished" podID="996da7ac-a127-4ea8-9516-67554e40db6a" containerID="69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280" exitCode=0
Jan 27 20:44:53 crc kubenswrapper[4915]: I0127 20:44:53.875078 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerDied","Data":"69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280"}
Jan 27 20:44:54 crc kubenswrapper[4915]: I0127 20:44:54.902107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerStarted","Data":"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"}
Jan 27 20:44:54 crc kubenswrapper[4915]: I0127 20:44:54.927430 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcdbw" podStartSLOduration=2.409333021 podStartE2EDuration="3.927403939s" podCreationTimestamp="2026-01-27 20:44:51 +0000 UTC" firstStartedPulling="2026-01-27 20:44:52.863532981 +0000 UTC m=+7384.221386645" lastFinishedPulling="2026-01-27 20:44:54.381603879 +0000 UTC m=+7385.739457563" observedRunningTime="2026-01-27 20:44:54.925756948 +0000 UTC m=+7386.283610652" watchObservedRunningTime="2026-01-27 20:44:54.927403939 +0000 UTC m=+7386.285257593"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.170842 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"]
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.172771 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.177432 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.177747 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.187589 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"]
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.214516 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.214587 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sgdx\" (UniqueName: \"kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.214766 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.316136 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.316243 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.316294 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sgdx\" (UniqueName: \"kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.317187 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.321260 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.331172 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sgdx\" (UniqueName: \"kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx\") pod \"collect-profiles-29492445-pzzdq\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.498282 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:00 crc kubenswrapper[4915]: I0127 20:45:00.961509 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"]
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.505380 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.505883 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.555452 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.973031 4915 generic.go:334] "Generic (PLEG): container finished" podID="a7d78de6-9776-40cd-8198-126773f4ebc8" containerID="11c8f6325578761406a1b46c2125f6457a8837cd1a80c573bbb260874c814d10" exitCode=0
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.973152 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq" event={"ID":"a7d78de6-9776-40cd-8198-126773f4ebc8","Type":"ContainerDied","Data":"11c8f6325578761406a1b46c2125f6457a8837cd1a80c573bbb260874c814d10"}
Jan 27 20:45:01 crc kubenswrapper[4915]: I0127 20:45:01.973208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq" event={"ID":"a7d78de6-9776-40cd-8198-126773f4ebc8","Type":"ContainerStarted","Data":"ce3169306403d816b4bf5f0bac4030eb235703940fcbee211a83e1591fb9903c"}
Jan 27 20:45:02 crc kubenswrapper[4915]: I0127 20:45:02.030585 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:02 crc kubenswrapper[4915]: I0127 20:45:02.074022 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"]
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.357532 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:45:03 crc kubenswrapper[4915]: E0127 20:45:03.357943 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.361968 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.485618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sgdx\" (UniqueName: \"kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx\") pod \"a7d78de6-9776-40cd-8198-126773f4ebc8\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") "
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.485877 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume\") pod \"a7d78de6-9776-40cd-8198-126773f4ebc8\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") "
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.485966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume\") pod \"a7d78de6-9776-40cd-8198-126773f4ebc8\" (UID: \"a7d78de6-9776-40cd-8198-126773f4ebc8\") "
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.488165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a7d78de6-9776-40cd-8198-126773f4ebc8" (UID: "a7d78de6-9776-40cd-8198-126773f4ebc8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.492973 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a7d78de6-9776-40cd-8198-126773f4ebc8" (UID: "a7d78de6-9776-40cd-8198-126773f4ebc8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.493035 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx" (OuterVolumeSpecName: "kube-api-access-7sgdx") pod "a7d78de6-9776-40cd-8198-126773f4ebc8" (UID: "a7d78de6-9776-40cd-8198-126773f4ebc8"). InnerVolumeSpecName "kube-api-access-7sgdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.587858 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7d78de6-9776-40cd-8198-126773f4ebc8-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.587893 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7d78de6-9776-40cd-8198-126773f4ebc8-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.587905 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sgdx\" (UniqueName: \"kubernetes.io/projected/a7d78de6-9776-40cd-8198-126773f4ebc8-kube-api-access-7sgdx\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.991012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq" event={"ID":"a7d78de6-9776-40cd-8198-126773f4ebc8","Type":"ContainerDied","Data":"ce3169306403d816b4bf5f0bac4030eb235703940fcbee211a83e1591fb9903c"}
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.991066 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3169306403d816b4bf5f0bac4030eb235703940fcbee211a83e1591fb9903c"
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.991031 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"
Jan 27 20:45:03 crc kubenswrapper[4915]: I0127 20:45:03.991180 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcdbw" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="registry-server" containerID="cri-o://11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d" gracePeriod=2
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.430381 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.448305 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb"]
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.457086 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-4m4nb"]
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.516940 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities\") pod \"996da7ac-a127-4ea8-9516-67554e40db6a\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") "
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.517147 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content\") pod \"996da7ac-a127-4ea8-9516-67554e40db6a\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") "
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.517197 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wks\" (UniqueName: \"kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks\") pod \"996da7ac-a127-4ea8-9516-67554e40db6a\" (UID: \"996da7ac-a127-4ea8-9516-67554e40db6a\") "
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.518288 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities" (OuterVolumeSpecName: "utilities") pod "996da7ac-a127-4ea8-9516-67554e40db6a" (UID: "996da7ac-a127-4ea8-9516-67554e40db6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.542036 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks" (OuterVolumeSpecName: "kube-api-access-n2wks") pod "996da7ac-a127-4ea8-9516-67554e40db6a" (UID: "996da7ac-a127-4ea8-9516-67554e40db6a"). InnerVolumeSpecName "kube-api-access-n2wks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.574570 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "996da7ac-a127-4ea8-9516-67554e40db6a" (UID: "996da7ac-a127-4ea8-9516-67554e40db6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.619921 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.620009 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996da7ac-a127-4ea8-9516-67554e40db6a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:04 crc kubenswrapper[4915]: I0127 20:45:04.620023 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wks\" (UniqueName: \"kubernetes.io/projected/996da7ac-a127-4ea8-9516-67554e40db6a-kube-api-access-n2wks\") on node \"crc\" DevicePath \"\""
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.009771 4915 generic.go:334] "Generic (PLEG): container finished" podID="996da7ac-a127-4ea8-9516-67554e40db6a" containerID="11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d" exitCode=0
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.009851 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerDied","Data":"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"}
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.009885 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcdbw" event={"ID":"996da7ac-a127-4ea8-9516-67554e40db6a","Type":"ContainerDied","Data":"db87762c4c04ae78e993cf108c8b6759cfd2106604978f6da74cac82ef65f2bc"}
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.009909 4915 scope.go:117] "RemoveContainer" containerID="11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.010083 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcdbw"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.028934 4915 scope.go:117] "RemoveContainer" containerID="69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.054488 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"]
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.061293 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcdbw"]
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.088719 4915 scope.go:117] "RemoveContainer" containerID="079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.114862 4915 scope.go:117] "RemoveContainer" containerID="11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"
Jan 27 20:45:05 crc kubenswrapper[4915]: E0127 20:45:05.115844 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d\": container with ID starting with 11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d not found: ID does not exist" containerID="11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.115886 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d"} err="failed to get container status \"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d\": rpc error: code = NotFound desc = could not find container \"11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d\": container with ID starting with 11265678e6f1ca63b806599a35385c4f96e34e0fd0b581522aca53d79ea4a70d not found: ID does not exist"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.115912 4915 scope.go:117] "RemoveContainer" containerID="69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280"
Jan 27 20:45:05 crc kubenswrapper[4915]: E0127 20:45:05.116364 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280\": container with ID starting with 69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280 not found: ID does not exist" containerID="69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.116465 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280"} err="failed to get container status \"69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280\": rpc error: code = NotFound desc = could not find container \"69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280\": container with ID starting with 69111c351cf12f1042aa1176c390eb8f993928bc7e461b359493b54c8f143280 not found: ID does not exist"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.116554 4915 scope.go:117] "RemoveContainer" containerID="079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a"
Jan 27 20:45:05 crc kubenswrapper[4915]: E0127 20:45:05.116956 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a\": container with ID starting with 079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a not found: ID does not exist" containerID="079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.116986 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a"} err="failed to get container status \"079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a\": rpc error: code = NotFound desc = could not find container \"079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a\": container with ID starting with 079bb4bca371f72b317fbfb20e4eb0626ae7904ef608bc7e496d97611c74328a not found: ID does not exist"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.372703 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71005214-b374-41b3-b484-609fe44d2fa1" path="/var/lib/kubelet/pods/71005214-b374-41b3-b484-609fe44d2fa1/volumes"
Jan 27 20:45:05 crc kubenswrapper[4915]: I0127 20:45:05.374021 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" path="/var/lib/kubelet/pods/996da7ac-a127-4ea8-9516-67554e40db6a/volumes"
Jan 27 20:45:18 crc kubenswrapper[4915]: I0127 20:45:18.357913 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:45:18 crc kubenswrapper[4915]: E0127 20:45:18.358839 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:45:31 crc kubenswrapper[4915]: I0127 20:45:31.358677 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:45:31 crc kubenswrapper[4915]: E0127 20:45:31.359719 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:45:42 crc kubenswrapper[4915]: I0127 20:45:42.358189 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:45:42 crc kubenswrapper[4915]: E0127 20:45:42.358948 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 20:45:57 crc kubenswrapper[4915]: I0127 20:45:57.357777 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25"
Jan 27 20:45:57 crc kubenswrapper[4915]: I0127 20:45:57.650997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e"}
Jan 27 20:45:59 crc kubenswrapper[4915]: I0127 20:45:59.898732 4915 scope.go:117] "RemoveContainer" containerID="c8c0a28d00d971aab6b71b4cfa83b41fb29b0248bc2a90ebaa0a757903062058"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.954640 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"]
Jan 27 20:46:09 crc kubenswrapper[4915]: E0127 20:46:09.957249 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="extract-utilities"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.957379 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="extract-utilities"
Jan 27 20:46:09 crc kubenswrapper[4915]: E0127 20:46:09.957470 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="extract-content"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.957553 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="extract-content"
Jan 27 20:46:09 crc kubenswrapper[4915]: E0127 20:46:09.957641 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d78de6-9776-40cd-8198-126773f4ebc8" containerName="collect-profiles"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.957720 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d78de6-9776-40cd-8198-126773f4ebc8" containerName="collect-profiles"
Jan 27 20:46:09 crc kubenswrapper[4915]: E0127 20:46:09.957832 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="registry-server"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.957928 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="registry-server"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.958242 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d78de6-9776-40cd-8198-126773f4ebc8" containerName="collect-profiles"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.958330 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="996da7ac-a127-4ea8-9516-67554e40db6a" containerName="registry-server"
Jan 27 20:46:09 crc kubenswrapper[4915]: I0127 20:46:09.960668 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.007832 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"]
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.132513 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.132912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmp4\" (UniqueName: \"kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.132938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.234467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.234561 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmp4\" (UniqueName: \"kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.234583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.235161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.235226 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.259772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmp4\" (UniqueName: \"kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4\") pod \"redhat-operators-dbgww\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.325734 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:10 crc kubenswrapper[4915]: I0127 20:46:10.783191 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"]
Jan 27 20:46:11 crc kubenswrapper[4915]: I0127 20:46:11.802355 4915 generic.go:334] "Generic (PLEG): container finished" podID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerID="085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1" exitCode=0
Jan 27 20:46:11 crc kubenswrapper[4915]: I0127 20:46:11.802484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerDied","Data":"085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1"}
Jan 27 20:46:11 crc kubenswrapper[4915]: I0127 20:46:11.802929 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerStarted","Data":"8ebc7ef94dc72ac10c4fbbab4b3db1fddde7af17c6e60dec41cd15048b1adbbb"}
Jan 27 20:46:11 crc kubenswrapper[4915]: I0127 20:46:11.807010 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 20:46:12 crc kubenswrapper[4915]: I0127 20:46:12.815609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerStarted","Data":"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8"}
Jan 27 20:46:15 crc kubenswrapper[4915]: I0127 20:46:15.843242 4915 generic.go:334] "Generic (PLEG): container finished" podID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerID="e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8" exitCode=0
Jan 27 20:46:15 crc kubenswrapper[4915]: I0127 20:46:15.843332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerDied","Data":"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8"}
Jan 27 20:46:16 crc kubenswrapper[4915]: I0127 20:46:16.853639 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerStarted","Data":"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0"}
Jan 27 20:46:16 crc kubenswrapper[4915]: I0127 20:46:16.876513 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dbgww" podStartSLOduration=3.40607461 podStartE2EDuration="7.876491344s" podCreationTimestamp="2026-01-27 20:46:09 +0000 UTC" firstStartedPulling="2026-01-27 20:46:11.806716925 +0000 UTC m=+7463.164570589" lastFinishedPulling="2026-01-27 20:46:16.277133659 +0000 UTC m=+7467.634987323" observedRunningTime="2026-01-27 20:46:16.871220895 +0000 UTC m=+7468.229074559" watchObservedRunningTime="2026-01-27 20:46:16.876491344 +0000 UTC m=+7468.234345008"
Jan 27 20:46:20 crc kubenswrapper[4915]: I0127 20:46:20.325910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:20 crc kubenswrapper[4915]: I0127 20:46:20.326504 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dbgww"
Jan 27 20:46:21 crc kubenswrapper[4915]: I0127 20:46:21.425426 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dbgww" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="registry-server" probeResult="failure" output=<
Jan 27 20:46:21 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 20:46:21 crc kubenswrapper[4915]: >
Jan 27 20:46:30 crc kubenswrapper[4915]:
I0127 20:46:30.370427 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dbgww" Jan 27 20:46:30 crc kubenswrapper[4915]: I0127 20:46:30.418734 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dbgww" Jan 27 20:46:30 crc kubenswrapper[4915]: I0127 20:46:30.604823 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"] Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.002490 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dbgww" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="registry-server" containerID="cri-o://08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0" gracePeriod=2 Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.492623 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dbgww" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.606405 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities\") pod \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.606504 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content\") pod \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.606547 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmp4\" (UniqueName: 
\"kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4\") pod \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\" (UID: \"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9\") " Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.608274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities" (OuterVolumeSpecName: "utilities") pod "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" (UID: "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.612138 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4" (OuterVolumeSpecName: "kube-api-access-lfmp4") pod "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" (UID: "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9"). InnerVolumeSpecName "kube-api-access-lfmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.711323 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.711365 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmp4\" (UniqueName: \"kubernetes.io/projected/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-kube-api-access-lfmp4\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.758626 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" (UID: "046bc5e2-a0fc-4a39-94ff-f63ed916c7f9"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4915]: I0127 20:46:32.812673 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.015809 4915 generic.go:334] "Generic (PLEG): container finished" podID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerID="08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0" exitCode=0 Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.015910 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerDied","Data":"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0"} Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.016159 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dbgww" event={"ID":"046bc5e2-a0fc-4a39-94ff-f63ed916c7f9","Type":"ContainerDied","Data":"8ebc7ef94dc72ac10c4fbbab4b3db1fddde7af17c6e60dec41cd15048b1adbbb"} Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.016184 4915 scope.go:117] "RemoveContainer" containerID="08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.015914 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dbgww" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.039325 4915 scope.go:117] "RemoveContainer" containerID="e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.057398 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"] Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.064945 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dbgww"] Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.105582 4915 scope.go:117] "RemoveContainer" containerID="085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.139105 4915 scope.go:117] "RemoveContainer" containerID="08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0" Jan 27 20:46:33 crc kubenswrapper[4915]: E0127 20:46:33.139703 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0\": container with ID starting with 08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0 not found: ID does not exist" containerID="08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.139761 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0"} err="failed to get container status \"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0\": rpc error: code = NotFound desc = could not find container \"08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0\": container with ID starting with 08101ab4be2cb0aa78faef96a30f4c50c75474a6a62f75c80f9cf4efe0efa1e0 not found: ID does 
not exist" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.140000 4915 scope.go:117] "RemoveContainer" containerID="e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8" Jan 27 20:46:33 crc kubenswrapper[4915]: E0127 20:46:33.140331 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8\": container with ID starting with e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8 not found: ID does not exist" containerID="e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.140353 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8"} err="failed to get container status \"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8\": rpc error: code = NotFound desc = could not find container \"e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8\": container with ID starting with e8ed3da82a333e1c642845cf914b91bfd0c92b18eccbbfda4958fc0bf2200ca8 not found: ID does not exist" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.140372 4915 scope.go:117] "RemoveContainer" containerID="085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1" Jan 27 20:46:33 crc kubenswrapper[4915]: E0127 20:46:33.140581 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1\": container with ID starting with 085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1 not found: ID does not exist" containerID="085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.140602 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1"} err="failed to get container status \"085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1\": rpc error: code = NotFound desc = could not find container \"085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1\": container with ID starting with 085bd059efd2bc6bbc687eac114e93dbf98d20d16b7c85677e340a5a5e7c56a1 not found: ID does not exist" Jan 27 20:46:33 crc kubenswrapper[4915]: I0127 20:46:33.384868 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" path="/var/lib/kubelet/pods/046bc5e2-a0fc-4a39-94ff-f63ed916c7f9/volumes" Jan 27 20:48:20 crc kubenswrapper[4915]: I0127 20:48:20.624673 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:48:20 crc kubenswrapper[4915]: I0127 20:48:20.625322 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:48:50 crc kubenswrapper[4915]: I0127 20:48:50.624536 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:48:50 crc kubenswrapper[4915]: I0127 20:48:50.625230 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.625067 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.625535 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.625579 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.626329 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.626385 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" 
containerID="cri-o://f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e" gracePeriod=600 Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.847073 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e" exitCode=0 Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.847123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e"} Jan 27 20:49:20 crc kubenswrapper[4915]: I0127 20:49:20.847437 4915 scope.go:117] "RemoveContainer" containerID="e86da2343e239a7ef3ea37fc376c0a637abb506b06fc0aa05bb42b3b09dc6f25" Jan 27 20:49:21 crc kubenswrapper[4915]: I0127 20:49:21.859131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731"} Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.707718 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:16 crc kubenswrapper[4915]: E0127 20:51:16.709251 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="extract-content" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.709282 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="extract-content" Jan 27 20:51:16 crc kubenswrapper[4915]: E0127 20:51:16.709336 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="extract-utilities" Jan 27 20:51:16 
crc kubenswrapper[4915]: I0127 20:51:16.709353 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="extract-utilities" Jan 27 20:51:16 crc kubenswrapper[4915]: E0127 20:51:16.709389 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="registry-server" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.709412 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="registry-server" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.709911 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="046bc5e2-a0fc-4a39-94ff-f63ed916c7f9" containerName="registry-server" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.713230 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.717924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.728050 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.728262 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: 
I0127 20:51:16.728390 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d67c\" (UniqueName: \"kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.831087 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.831174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.831236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d67c\" (UniqueName: \"kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.831949 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 
20:51:16.832139 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:16 crc kubenswrapper[4915]: I0127 20:51:16.864290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d67c\" (UniqueName: \"kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c\") pod \"redhat-marketplace-6z2rp\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:17 crc kubenswrapper[4915]: I0127 20:51:17.053485 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:17 crc kubenswrapper[4915]: I0127 20:51:17.523845 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:18 crc kubenswrapper[4915]: I0127 20:51:18.078177 4915 generic.go:334] "Generic (PLEG): container finished" podID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerID="fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669" exitCode=0 Jan 27 20:51:18 crc kubenswrapper[4915]: I0127 20:51:18.078258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerDied","Data":"fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669"} Jan 27 20:51:18 crc kubenswrapper[4915]: I0127 20:51:18.078463 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" 
event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerStarted","Data":"689525e82757e4bb8910f47309df1358cd79674ee10a0f4bce11af9ac58bb9e6"} Jan 27 20:51:18 crc kubenswrapper[4915]: I0127 20:51:18.080844 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:51:20 crc kubenswrapper[4915]: I0127 20:51:20.097684 4915 generic.go:334] "Generic (PLEG): container finished" podID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerID="cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503" exitCode=0 Jan 27 20:51:20 crc kubenswrapper[4915]: I0127 20:51:20.097774 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerDied","Data":"cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503"} Jan 27 20:51:20 crc kubenswrapper[4915]: I0127 20:51:20.624309 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:51:20 crc kubenswrapper[4915]: I0127 20:51:20.624371 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:51:21 crc kubenswrapper[4915]: I0127 20:51:21.109463 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerStarted","Data":"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80"} Jan 27 20:51:21 crc 
kubenswrapper[4915]: I0127 20:51:21.132750 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6z2rp" podStartSLOduration=2.726228869 podStartE2EDuration="5.132732844s" podCreationTimestamp="2026-01-27 20:51:16 +0000 UTC" firstStartedPulling="2026-01-27 20:51:18.080562586 +0000 UTC m=+7769.438416250" lastFinishedPulling="2026-01-27 20:51:20.487066561 +0000 UTC m=+7771.844920225" observedRunningTime="2026-01-27 20:51:21.127899926 +0000 UTC m=+7772.485753600" watchObservedRunningTime="2026-01-27 20:51:21.132732844 +0000 UTC m=+7772.490586508" Jan 27 20:51:27 crc kubenswrapper[4915]: I0127 20:51:27.054078 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:27 crc kubenswrapper[4915]: I0127 20:51:27.055064 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:27 crc kubenswrapper[4915]: I0127 20:51:27.099920 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:27 crc kubenswrapper[4915]: I0127 20:51:27.206313 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:27 crc kubenswrapper[4915]: I0127 20:51:27.334686 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.179874 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6z2rp" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="registry-server" containerID="cri-o://868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80" gracePeriod=2 Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.649008 4915 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.816830 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content\") pod \"1e6a2921-9502-4ea8-b158-f029391a1c24\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.816905 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d67c\" (UniqueName: \"kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c\") pod \"1e6a2921-9502-4ea8-b158-f029391a1c24\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.816937 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities\") pod \"1e6a2921-9502-4ea8-b158-f029391a1c24\" (UID: \"1e6a2921-9502-4ea8-b158-f029391a1c24\") " Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.818069 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities" (OuterVolumeSpecName: "utilities") pod "1e6a2921-9502-4ea8-b158-f029391a1c24" (UID: "1e6a2921-9502-4ea8-b158-f029391a1c24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.826178 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c" (OuterVolumeSpecName: "kube-api-access-7d67c") pod "1e6a2921-9502-4ea8-b158-f029391a1c24" (UID: "1e6a2921-9502-4ea8-b158-f029391a1c24"). 
InnerVolumeSpecName "kube-api-access-7d67c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.919239 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d67c\" (UniqueName: \"kubernetes.io/projected/1e6a2921-9502-4ea8-b158-f029391a1c24-kube-api-access-7d67c\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.919617 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:29 crc kubenswrapper[4915]: I0127 20:51:29.924916 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e6a2921-9502-4ea8-b158-f029391a1c24" (UID: "1e6a2921-9502-4ea8-b158-f029391a1c24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.021854 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e6a2921-9502-4ea8-b158-f029391a1c24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.190512 4915 generic.go:334] "Generic (PLEG): container finished" podID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerID="868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80" exitCode=0 Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.190557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerDied","Data":"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80"} Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.190592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6z2rp" event={"ID":"1e6a2921-9502-4ea8-b158-f029391a1c24","Type":"ContainerDied","Data":"689525e82757e4bb8910f47309df1358cd79674ee10a0f4bce11af9ac58bb9e6"} Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.190609 4915 scope.go:117] "RemoveContainer" containerID="868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.190603 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6z2rp" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.216862 4915 scope.go:117] "RemoveContainer" containerID="cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.239538 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.245766 4915 scope.go:117] "RemoveContainer" containerID="fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.249680 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6z2rp"] Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.284727 4915 scope.go:117] "RemoveContainer" containerID="868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80" Jan 27 20:51:30 crc kubenswrapper[4915]: E0127 20:51:30.285486 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80\": container with ID starting with 868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80 not found: ID does not exist" containerID="868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.285538 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80"} err="failed to get container status \"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80\": rpc error: code = NotFound desc = could not find container \"868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80\": container with ID starting with 868da6353b3b508198ac2876dcfebbfdebf22830702e7f633719635ac05efc80 not found: 
ID does not exist" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.285572 4915 scope.go:117] "RemoveContainer" containerID="cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503" Jan 27 20:51:30 crc kubenswrapper[4915]: E0127 20:51:30.285985 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503\": container with ID starting with cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503 not found: ID does not exist" containerID="cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.286024 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503"} err="failed to get container status \"cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503\": rpc error: code = NotFound desc = could not find container \"cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503\": container with ID starting with cab0d8709d50ad5f4fc87f01b1f9fecf5c3cc6a3a34d493a01a46ff2a7ca8503 not found: ID does not exist" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.286049 4915 scope.go:117] "RemoveContainer" containerID="fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669" Jan 27 20:51:30 crc kubenswrapper[4915]: E0127 20:51:30.286428 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669\": container with ID starting with fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669 not found: ID does not exist" containerID="fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669" Jan 27 20:51:30 crc kubenswrapper[4915]: I0127 20:51:30.286456 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669"} err="failed to get container status \"fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669\": rpc error: code = NotFound desc = could not find container \"fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669\": container with ID starting with fbbba3bcc47b19c73e72585da66417fe8fc25a761d723a700f637d99e5b96669 not found: ID does not exist" Jan 27 20:51:31 crc kubenswrapper[4915]: I0127 20:51:31.369082 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" path="/var/lib/kubelet/pods/1e6a2921-9502-4ea8-b158-f029391a1c24/volumes" Jan 27 20:51:50 crc kubenswrapper[4915]: I0127 20:51:50.624477 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:51:50 crc kubenswrapper[4915]: I0127 20:51:50.625197 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:52:20 crc kubenswrapper[4915]: I0127 20:52:20.625080 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:52:20 crc kubenswrapper[4915]: I0127 20:52:20.625585 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:52:20 crc kubenswrapper[4915]: I0127 20:52:20.625641 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 20:52:20 crc kubenswrapper[4915]: I0127 20:52:20.626862 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:52:20 crc kubenswrapper[4915]: I0127 20:52:20.627035 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" gracePeriod=600 Jan 27 20:52:20 crc kubenswrapper[4915]: E0127 20:52:20.747949 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:52:21 crc kubenswrapper[4915]: I0127 20:52:21.631057 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" 
containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" exitCode=0 Jan 27 20:52:21 crc kubenswrapper[4915]: I0127 20:52:21.631122 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731"} Jan 27 20:52:21 crc kubenswrapper[4915]: I0127 20:52:21.631409 4915 scope.go:117] "RemoveContainer" containerID="f12c833cfb4744d3a994f77594e6c4c5e531e035df8f6dbc7b824547f8ffb00e" Jan 27 20:52:21 crc kubenswrapper[4915]: I0127 20:52:21.632135 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:52:21 crc kubenswrapper[4915]: E0127 20:52:21.632440 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:52:33 crc kubenswrapper[4915]: I0127 20:52:33.358668 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:52:33 crc kubenswrapper[4915]: E0127 20:52:33.359777 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 
20:52:36.833440 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:36 crc kubenswrapper[4915]: E0127 20:52:36.834427 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="extract-utilities" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.834443 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="extract-utilities" Jan 27 20:52:36 crc kubenswrapper[4915]: E0127 20:52:36.834476 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="extract-content" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.834483 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="extract-content" Jan 27 20:52:36 crc kubenswrapper[4915]: E0127 20:52:36.834513 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="registry-server" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.834521 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="registry-server" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.834722 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a2921-9502-4ea8-b158-f029391a1c24" containerName="registry-server" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.836378 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.846429 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.994300 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.994386 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzz9p\" (UniqueName: \"kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:36 crc kubenswrapper[4915]: I0127 20:52:36.994550 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.095897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.096302 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bzz9p\" (UniqueName: \"kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.096365 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.096500 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.096765 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.118894 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzz9p\" (UniqueName: \"kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p\") pod \"community-operators-75x76\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.162339 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.725923 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:37 crc kubenswrapper[4915]: W0127 20:52:37.726680 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd15913_be6c_4cfe_b778_9f41b349b9cd.slice/crio-a433f8634f66aef5315123a2f295b996a17873a12238548af535b79ecc0e1901 WatchSource:0}: Error finding container a433f8634f66aef5315123a2f295b996a17873a12238548af535b79ecc0e1901: Status 404 returned error can't find the container with id a433f8634f66aef5315123a2f295b996a17873a12238548af535b79ecc0e1901 Jan 27 20:52:37 crc kubenswrapper[4915]: I0127 20:52:37.806355 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerStarted","Data":"a433f8634f66aef5315123a2f295b996a17873a12238548af535b79ecc0e1901"} Jan 27 20:52:38 crc kubenswrapper[4915]: I0127 20:52:38.815228 4915 generic.go:334] "Generic (PLEG): container finished" podID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerID="07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf" exitCode=0 Jan 27 20:52:38 crc kubenswrapper[4915]: I0127 20:52:38.815312 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerDied","Data":"07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf"} Jan 27 20:52:39 crc kubenswrapper[4915]: I0127 20:52:39.828772 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" 
event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerStarted","Data":"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17"} Jan 27 20:52:40 crc kubenswrapper[4915]: I0127 20:52:40.845505 4915 generic.go:334] "Generic (PLEG): container finished" podID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerID="4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17" exitCode=0 Jan 27 20:52:40 crc kubenswrapper[4915]: I0127 20:52:40.845610 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerDied","Data":"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17"} Jan 27 20:52:41 crc kubenswrapper[4915]: I0127 20:52:41.860997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerStarted","Data":"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365"} Jan 27 20:52:41 crc kubenswrapper[4915]: I0127 20:52:41.887029 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-75x76" podStartSLOduration=3.472345127 podStartE2EDuration="5.887009823s" podCreationTimestamp="2026-01-27 20:52:36 +0000 UTC" firstStartedPulling="2026-01-27 20:52:38.817423167 +0000 UTC m=+7850.175276831" lastFinishedPulling="2026-01-27 20:52:41.232087863 +0000 UTC m=+7852.589941527" observedRunningTime="2026-01-27 20:52:41.883119157 +0000 UTC m=+7853.240972841" watchObservedRunningTime="2026-01-27 20:52:41.887009823 +0000 UTC m=+7853.244863497" Jan 27 20:52:46 crc kubenswrapper[4915]: I0127 20:52:46.358728 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:52:46 crc kubenswrapper[4915]: E0127 20:52:46.359951 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:52:47 crc kubenswrapper[4915]: I0127 20:52:47.163660 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:47 crc kubenswrapper[4915]: I0127 20:52:47.163722 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:47 crc kubenswrapper[4915]: I0127 20:52:47.210982 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:48 crc kubenswrapper[4915]: I0127 20:52:48.001658 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:48 crc kubenswrapper[4915]: I0127 20:52:48.064668 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:49 crc kubenswrapper[4915]: I0127 20:52:49.954625 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-75x76" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="registry-server" containerID="cri-o://bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365" gracePeriod=2 Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.483178 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.582270 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content\") pod \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.582618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzz9p\" (UniqueName: \"kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p\") pod \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.582670 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities\") pod \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\" (UID: \"ffd15913-be6c-4cfe-b778-9f41b349b9cd\") " Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.583435 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities" (OuterVolumeSpecName: "utilities") pod "ffd15913-be6c-4cfe-b778-9f41b349b9cd" (UID: "ffd15913-be6c-4cfe-b778-9f41b349b9cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.588185 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p" (OuterVolumeSpecName: "kube-api-access-bzz9p") pod "ffd15913-be6c-4cfe-b778-9f41b349b9cd" (UID: "ffd15913-be6c-4cfe-b778-9f41b349b9cd"). InnerVolumeSpecName "kube-api-access-bzz9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.668923 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffd15913-be6c-4cfe-b778-9f41b349b9cd" (UID: "ffd15913-be6c-4cfe-b778-9f41b349b9cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.685208 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.685286 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzz9p\" (UniqueName: \"kubernetes.io/projected/ffd15913-be6c-4cfe-b778-9f41b349b9cd-kube-api-access-bzz9p\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.685307 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd15913-be6c-4cfe-b778-9f41b349b9cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.967227 4915 generic.go:334] "Generic (PLEG): container finished" podID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerID="bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365" exitCode=0 Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.967312 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerDied","Data":"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365"} Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.967637 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-75x76" event={"ID":"ffd15913-be6c-4cfe-b778-9f41b349b9cd","Type":"ContainerDied","Data":"a433f8634f66aef5315123a2f295b996a17873a12238548af535b79ecc0e1901"} Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.967666 4915 scope.go:117] "RemoveContainer" containerID="bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365" Jan 27 20:52:50 crc kubenswrapper[4915]: I0127 20:52:50.967342 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75x76" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.001013 4915 scope.go:117] "RemoveContainer" containerID="4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.016989 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.024993 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-75x76"] Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.053058 4915 scope.go:117] "RemoveContainer" containerID="07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.073122 4915 scope.go:117] "RemoveContainer" containerID="bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365" Jan 27 20:52:51 crc kubenswrapper[4915]: E0127 20:52:51.073721 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365\": container with ID starting with bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365 not found: ID does not exist" containerID="bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 
20:52:51.073775 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365"} err="failed to get container status \"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365\": rpc error: code = NotFound desc = could not find container \"bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365\": container with ID starting with bea3a1af5710dde43a72f16a154852f29d1214ab68812a92dedc30d5e4326365 not found: ID does not exist" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.073819 4915 scope.go:117] "RemoveContainer" containerID="4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17" Jan 27 20:52:51 crc kubenswrapper[4915]: E0127 20:52:51.074313 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17\": container with ID starting with 4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17 not found: ID does not exist" containerID="4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.074364 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17"} err="failed to get container status \"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17\": rpc error: code = NotFound desc = could not find container \"4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17\": container with ID starting with 4890eb2cb4fc70e7063fdaa72b42b0a174b64a304d796aa6e121f89b41307d17 not found: ID does not exist" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.074397 4915 scope.go:117] "RemoveContainer" containerID="07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf" Jan 27 20:52:51 crc 
kubenswrapper[4915]: E0127 20:52:51.074697 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf\": container with ID starting with 07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf not found: ID does not exist" containerID="07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.074725 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf"} err="failed to get container status \"07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf\": rpc error: code = NotFound desc = could not find container \"07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf\": container with ID starting with 07b9abe133a33131db410b89ea1b95e22067cb6c6434ebe644fd5bfbe2b1e7cf not found: ID does not exist" Jan 27 20:52:51 crc kubenswrapper[4915]: I0127 20:52:51.373909 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" path="/var/lib/kubelet/pods/ffd15913-be6c-4cfe-b778-9f41b349b9cd/volumes" Jan 27 20:52:58 crc kubenswrapper[4915]: I0127 20:52:58.358685 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:52:58 crc kubenswrapper[4915]: E0127 20:52:58.359745 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:53:11 crc 
kubenswrapper[4915]: I0127 20:53:11.359355 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:53:11 crc kubenswrapper[4915]: E0127 20:53:11.360869 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:53:22 crc kubenswrapper[4915]: I0127 20:53:22.357642 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:53:22 crc kubenswrapper[4915]: E0127 20:53:22.358452 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:53:35 crc kubenswrapper[4915]: I0127 20:53:35.358845 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:53:35 crc kubenswrapper[4915]: E0127 20:53:35.360536 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 
27 20:53:46 crc kubenswrapper[4915]: I0127 20:53:46.358756 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:53:46 crc kubenswrapper[4915]: E0127 20:53:46.360433 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:53:57 crc kubenswrapper[4915]: I0127 20:53:57.359309 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:53:57 crc kubenswrapper[4915]: E0127 20:53:57.360526 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:54:12 crc kubenswrapper[4915]: I0127 20:54:12.358352 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:54:12 crc kubenswrapper[4915]: E0127 20:54:12.359255 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:54:27 crc kubenswrapper[4915]: I0127 20:54:27.358140 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:54:27 crc kubenswrapper[4915]: E0127 20:54:27.359296 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:54:38 crc kubenswrapper[4915]: I0127 20:54:38.357592 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:54:38 crc kubenswrapper[4915]: E0127 20:54:38.358559 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.236679 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:54:52 crc kubenswrapper[4915]: E0127 20:54:52.237769 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="extract-utilities" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.237807 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="extract-utilities" Jan 27 20:54:52 crc 
kubenswrapper[4915]: E0127 20:54:52.237849 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="extract-content" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.237857 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="extract-content" Jan 27 20:54:52 crc kubenswrapper[4915]: E0127 20:54:52.237882 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="registry-server" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.237889 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="registry-server" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.238130 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd15913-be6c-4cfe-b778-9f41b349b9cd" containerName="registry-server" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.239723 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.253137 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.328835 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7rw9\" (UniqueName: \"kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.328905 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.328959 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.432337 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7rw9\" (UniqueName: \"kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.432420 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.432509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.433047 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.433192 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.457380 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7rw9\" (UniqueName: \"kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9\") pod \"certified-operators-kvz44\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:52 crc kubenswrapper[4915]: I0127 20:54:52.574321 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:54:53 crc kubenswrapper[4915]: I0127 20:54:53.113182 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:54:53 crc kubenswrapper[4915]: I0127 20:54:53.242363 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerStarted","Data":"1ecc22b4a321d2e9e6685a0a6f74c411fb2247b4a83d8bcb4ba35d28be06cf8c"} Jan 27 20:54:53 crc kubenswrapper[4915]: I0127 20:54:53.358677 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:54:53 crc kubenswrapper[4915]: E0127 20:54:53.358964 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:54:54 crc kubenswrapper[4915]: I0127 20:54:54.253230 4915 generic.go:334] "Generic (PLEG): container finished" podID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerID="6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a" exitCode=0 Jan 27 20:54:54 crc kubenswrapper[4915]: I0127 20:54:54.253505 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerDied","Data":"6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a"} Jan 27 20:54:56 crc kubenswrapper[4915]: I0127 20:54:56.276497 4915 generic.go:334] "Generic (PLEG): container finished" podID="08bb1cf4-0626-4abc-beda-89db767d1a18" 
containerID="25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192" exitCode=0 Jan 27 20:54:56 crc kubenswrapper[4915]: I0127 20:54:56.276599 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerDied","Data":"25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192"} Jan 27 20:54:57 crc kubenswrapper[4915]: I0127 20:54:57.293557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerStarted","Data":"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b"} Jan 27 20:54:57 crc kubenswrapper[4915]: I0127 20:54:57.316953 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvz44" podStartSLOduration=2.838433707 podStartE2EDuration="5.316932717s" podCreationTimestamp="2026-01-27 20:54:52 +0000 UTC" firstStartedPulling="2026-01-27 20:54:54.257149388 +0000 UTC m=+7985.615003062" lastFinishedPulling="2026-01-27 20:54:56.735648348 +0000 UTC m=+7988.093502072" observedRunningTime="2026-01-27 20:54:57.311801999 +0000 UTC m=+7988.669655683" watchObservedRunningTime="2026-01-27 20:54:57.316932717 +0000 UTC m=+7988.674786381" Jan 27 20:55:02 crc kubenswrapper[4915]: I0127 20:55:02.575155 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:02 crc kubenswrapper[4915]: I0127 20:55:02.576295 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:02 crc kubenswrapper[4915]: I0127 20:55:02.639255 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:03 crc kubenswrapper[4915]: I0127 
20:55:03.403560 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:03 crc kubenswrapper[4915]: I0127 20:55:03.465241 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.375328 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvz44" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="registry-server" containerID="cri-o://cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b" gracePeriod=2 Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.862708 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.941053 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content\") pod \"08bb1cf4-0626-4abc-beda-89db767d1a18\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.941189 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities\") pod \"08bb1cf4-0626-4abc-beda-89db767d1a18\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.941450 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7rw9\" (UniqueName: \"kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9\") pod \"08bb1cf4-0626-4abc-beda-89db767d1a18\" (UID: \"08bb1cf4-0626-4abc-beda-89db767d1a18\") " Jan 27 20:55:05 crc kubenswrapper[4915]: 
I0127 20:55:05.942616 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities" (OuterVolumeSpecName: "utilities") pod "08bb1cf4-0626-4abc-beda-89db767d1a18" (UID: "08bb1cf4-0626-4abc-beda-89db767d1a18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:55:05 crc kubenswrapper[4915]: I0127 20:55:05.949759 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9" (OuterVolumeSpecName: "kube-api-access-l7rw9") pod "08bb1cf4-0626-4abc-beda-89db767d1a18" (UID: "08bb1cf4-0626-4abc-beda-89db767d1a18"). InnerVolumeSpecName "kube-api-access-l7rw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.025448 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08bb1cf4-0626-4abc-beda-89db767d1a18" (UID: "08bb1cf4-0626-4abc-beda-89db767d1a18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.043989 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7rw9\" (UniqueName: \"kubernetes.io/projected/08bb1cf4-0626-4abc-beda-89db767d1a18-kube-api-access-l7rw9\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.044026 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.044038 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08bb1cf4-0626-4abc-beda-89db767d1a18-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.385670 4915 generic.go:334] "Generic (PLEG): container finished" podID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerID="cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b" exitCode=0 Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.385711 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerDied","Data":"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b"} Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.385736 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvz44" event={"ID":"08bb1cf4-0626-4abc-beda-89db767d1a18","Type":"ContainerDied","Data":"1ecc22b4a321d2e9e6685a0a6f74c411fb2247b4a83d8bcb4ba35d28be06cf8c"} Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.385752 4915 scope.go:117] "RemoveContainer" containerID="cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 
20:55:06.385888 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvz44" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.407246 4915 scope.go:117] "RemoveContainer" containerID="25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.421773 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.429374 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvz44"] Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.442384 4915 scope.go:117] "RemoveContainer" containerID="6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.478048 4915 scope.go:117] "RemoveContainer" containerID="cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b" Jan 27 20:55:06 crc kubenswrapper[4915]: E0127 20:55:06.478540 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b\": container with ID starting with cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b not found: ID does not exist" containerID="cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.478646 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b"} err="failed to get container status \"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b\": rpc error: code = NotFound desc = could not find container \"cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b\": container with ID starting with 
cd5911cf6691919a835f5418779ceb9b4559b18d6b164e728ee92b49c6c2d87b not found: ID does not exist" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.478727 4915 scope.go:117] "RemoveContainer" containerID="25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192" Jan 27 20:55:06 crc kubenswrapper[4915]: E0127 20:55:06.479001 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192\": container with ID starting with 25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192 not found: ID does not exist" containerID="25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.479095 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192"} err="failed to get container status \"25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192\": rpc error: code = NotFound desc = could not find container \"25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192\": container with ID starting with 25a4ed6033a611e3b9b28621319fc972b7226bef90b00f41ea980487044ae192 not found: ID does not exist" Jan 27 20:55:06 crc kubenswrapper[4915]: I0127 20:55:06.479168 4915 scope.go:117] "RemoveContainer" containerID="6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a" Jan 27 20:55:06 crc kubenswrapper[4915]: E0127 20:55:06.479445 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a\": container with ID starting with 6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a not found: ID does not exist" containerID="6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a" Jan 27 20:55:06 crc 
kubenswrapper[4915]: I0127 20:55:06.479528 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a"} err="failed to get container status \"6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a\": rpc error: code = NotFound desc = could not find container \"6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a\": container with ID starting with 6b90f14406f6d00ca6061022d488025a2a243c160a8f4f75043ed34b92c3ed7a not found: ID does not exist" Jan 27 20:55:07 crc kubenswrapper[4915]: I0127 20:55:07.358516 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:55:07 crc kubenswrapper[4915]: E0127 20:55:07.359404 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:55:07 crc kubenswrapper[4915]: I0127 20:55:07.372812 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" path="/var/lib/kubelet/pods/08bb1cf4-0626-4abc-beda-89db767d1a18/volumes" Jan 27 20:55:22 crc kubenswrapper[4915]: I0127 20:55:22.357829 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:55:22 crc kubenswrapper[4915]: E0127 20:55:22.358710 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:55:34 crc kubenswrapper[4915]: I0127 20:55:34.383512 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:55:34 crc kubenswrapper[4915]: E0127 20:55:34.384869 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:55:47 crc kubenswrapper[4915]: I0127 20:55:47.358179 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:55:47 crc kubenswrapper[4915]: E0127 20:55:47.359230 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:01 crc kubenswrapper[4915]: I0127 20:56:01.357843 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:01 crc kubenswrapper[4915]: E0127 20:56:01.358869 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:12 crc kubenswrapper[4915]: I0127 20:56:12.357725 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:12 crc kubenswrapper[4915]: E0127 20:56:12.359068 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:24 crc kubenswrapper[4915]: I0127 20:56:24.358324 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:24 crc kubenswrapper[4915]: E0127 20:56:24.360067 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:35 crc kubenswrapper[4915]: I0127 20:56:35.357763 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:35 crc kubenswrapper[4915]: E0127 20:56:35.358993 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:47 crc kubenswrapper[4915]: I0127 20:56:47.360381 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:47 crc kubenswrapper[4915]: E0127 20:56:47.361222 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:56:59 crc kubenswrapper[4915]: I0127 20:56:59.364405 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:56:59 crc kubenswrapper[4915]: E0127 20:56:59.365481 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:57:14 crc kubenswrapper[4915]: I0127 20:57:14.358111 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:57:14 crc kubenswrapper[4915]: E0127 20:57:14.359085 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 20:57:26 crc kubenswrapper[4915]: I0127 20:57:26.357987 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 20:57:26 crc kubenswrapper[4915]: I0127 20:57:26.645110 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845"} Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.853690 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:34 crc kubenswrapper[4915]: E0127 20:59:34.854672 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="extract-content" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.854687 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="extract-content" Jan 27 20:59:34 crc kubenswrapper[4915]: E0127 20:59:34.854702 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="registry-server" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.854710 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="registry-server" Jan 27 20:59:34 crc kubenswrapper[4915]: E0127 20:59:34.854724 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="extract-utilities" Jan 27 20:59:34 crc 
kubenswrapper[4915]: I0127 20:59:34.854732 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="extract-utilities" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.854973 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bb1cf4-0626-4abc-beda-89db767d1a18" containerName="registry-server" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.856570 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.889384 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.971127 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrq8\" (UniqueName: \"kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.971203 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:34 crc kubenswrapper[4915]: I0127 20:59:34.971226 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc 
kubenswrapper[4915]: I0127 20:59:35.073409 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrq8\" (UniqueName: \"kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.073501 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.073528 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.074083 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.074104 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.096447 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrq8\" (UniqueName: \"kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8\") pod \"redhat-operators-z5r9c\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.193254 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.700931 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.890151 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerStarted","Data":"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e"} Jan 27 20:59:35 crc kubenswrapper[4915]: I0127 20:59:35.890502 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerStarted","Data":"83e09b91ea68ce8f1e52e9243bfe33febeebc310c5bdc30f1ec6fc03357d2e11"} Jan 27 20:59:36 crc kubenswrapper[4915]: I0127 20:59:36.901894 4915 generic.go:334] "Generic (PLEG): container finished" podID="3d499feb-9e58-4617-8172-4422737dd0bb" containerID="b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e" exitCode=0 Jan 27 20:59:36 crc kubenswrapper[4915]: I0127 20:59:36.901955 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerDied","Data":"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e"} Jan 27 20:59:36 crc kubenswrapper[4915]: I0127 20:59:36.904418 4915 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:59:37 crc kubenswrapper[4915]: I0127 20:59:37.911997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerStarted","Data":"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319"} Jan 27 20:59:39 crc kubenswrapper[4915]: I0127 20:59:39.936446 4915 generic.go:334] "Generic (PLEG): container finished" podID="3d499feb-9e58-4617-8172-4422737dd0bb" containerID="0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319" exitCode=0 Jan 27 20:59:39 crc kubenswrapper[4915]: I0127 20:59:39.936698 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerDied","Data":"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319"} Jan 27 20:59:40 crc kubenswrapper[4915]: I0127 20:59:40.957259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerStarted","Data":"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb"} Jan 27 20:59:40 crc kubenswrapper[4915]: I0127 20:59:40.975495 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5r9c" podStartSLOduration=3.156810372 podStartE2EDuration="6.975471076s" podCreationTimestamp="2026-01-27 20:59:34 +0000 UTC" firstStartedPulling="2026-01-27 20:59:36.904073015 +0000 UTC m=+8268.261926689" lastFinishedPulling="2026-01-27 20:59:40.722733719 +0000 UTC m=+8272.080587393" observedRunningTime="2026-01-27 20:59:40.975382863 +0000 UTC m=+8272.333236587" watchObservedRunningTime="2026-01-27 20:59:40.975471076 +0000 UTC m=+8272.333324750" Jan 27 20:59:45 crc kubenswrapper[4915]: I0127 20:59:45.195037 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:45 crc kubenswrapper[4915]: I0127 20:59:45.195523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:46 crc kubenswrapper[4915]: I0127 20:59:46.246376 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5r9c" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="registry-server" probeResult="failure" output=< Jan 27 20:59:46 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 20:59:46 crc kubenswrapper[4915]: > Jan 27 20:59:50 crc kubenswrapper[4915]: I0127 20:59:50.624528 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:59:50 crc kubenswrapper[4915]: I0127 20:59:50.625238 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:59:55 crc kubenswrapper[4915]: I0127 20:59:55.276343 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:55 crc kubenswrapper[4915]: I0127 20:59:55.350753 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:55 crc kubenswrapper[4915]: I0127 20:59:55.512278 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.088282 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5r9c" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="registry-server" containerID="cri-o://cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb" gracePeriod=2 Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.531271 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.683199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content\") pod \"3d499feb-9e58-4617-8172-4422737dd0bb\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.683369 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmrq8\" (UniqueName: \"kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8\") pod \"3d499feb-9e58-4617-8172-4422737dd0bb\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.683488 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities\") pod \"3d499feb-9e58-4617-8172-4422737dd0bb\" (UID: \"3d499feb-9e58-4617-8172-4422737dd0bb\") " Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.684271 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities" (OuterVolumeSpecName: "utilities") pod "3d499feb-9e58-4617-8172-4422737dd0bb" (UID: 
"3d499feb-9e58-4617-8172-4422737dd0bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.689915 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8" (OuterVolumeSpecName: "kube-api-access-tmrq8") pod "3d499feb-9e58-4617-8172-4422737dd0bb" (UID: "3d499feb-9e58-4617-8172-4422737dd0bb"). InnerVolumeSpecName "kube-api-access-tmrq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.785866 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.785950 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmrq8\" (UniqueName: \"kubernetes.io/projected/3d499feb-9e58-4617-8172-4422737dd0bb-kube-api-access-tmrq8\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.820779 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d499feb-9e58-4617-8172-4422737dd0bb" (UID: "3d499feb-9e58-4617-8172-4422737dd0bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:59:57 crc kubenswrapper[4915]: I0127 20:59:57.887763 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d499feb-9e58-4617-8172-4422737dd0bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.100436 4915 generic.go:334] "Generic (PLEG): container finished" podID="3d499feb-9e58-4617-8172-4422737dd0bb" containerID="cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb" exitCode=0 Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.100484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerDied","Data":"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb"} Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.100536 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r9c" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.100547 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r9c" event={"ID":"3d499feb-9e58-4617-8172-4422737dd0bb","Type":"ContainerDied","Data":"83e09b91ea68ce8f1e52e9243bfe33febeebc310c5bdc30f1ec6fc03357d2e11"} Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.100585 4915 scope.go:117] "RemoveContainer" containerID="cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.130102 4915 scope.go:117] "RemoveContainer" containerID="0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.158048 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.170267 4915 scope.go:117] "RemoveContainer" containerID="b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.174879 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5r9c"] Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.213566 4915 scope.go:117] "RemoveContainer" containerID="cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb" Jan 27 20:59:58 crc kubenswrapper[4915]: E0127 20:59:58.214275 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb\": container with ID starting with cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb not found: ID does not exist" containerID="cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.214326 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb"} err="failed to get container status \"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb\": rpc error: code = NotFound desc = could not find container \"cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb\": container with ID starting with cfc4572d5617f52d79ed48cca001be66a3cbf8296e4efaca8d349c9378a675eb not found: ID does not exist" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.214360 4915 scope.go:117] "RemoveContainer" containerID="0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319" Jan 27 20:59:58 crc kubenswrapper[4915]: E0127 20:59:58.214754 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319\": container with ID starting with 0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319 not found: ID does not exist" containerID="0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.214821 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319"} err="failed to get container status \"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319\": rpc error: code = NotFound desc = could not find container \"0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319\": container with ID starting with 0432c46df5220a2ce3bad4ab3d1185bde438a53477d9456287de1f1af13d5319 not found: ID does not exist" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.214873 4915 scope.go:117] "RemoveContainer" containerID="b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e" Jan 27 20:59:58 crc kubenswrapper[4915]: E0127 
20:59:58.215164 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e\": container with ID starting with b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e not found: ID does not exist" containerID="b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e" Jan 27 20:59:58 crc kubenswrapper[4915]: I0127 20:59:58.215195 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e"} err="failed to get container status \"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e\": rpc error: code = NotFound desc = could not find container \"b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e\": container with ID starting with b645bdbcd0e1077fad8645d1b061322343b867c370a44b218ef0104df95b200e not found: ID does not exist" Jan 27 20:59:59 crc kubenswrapper[4915]: I0127 20:59:59.371945 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" path="/var/lib/kubelet/pods/3d499feb-9e58-4617-8172-4422737dd0bb/volumes" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.167515 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29"] Jan 27 21:00:00 crc kubenswrapper[4915]: E0127 21:00:00.167957 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="extract-content" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.167978 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="extract-content" Jan 27 21:00:00 crc kubenswrapper[4915]: E0127 21:00:00.167996 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="registry-server" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.168004 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="registry-server" Jan 27 21:00:00 crc kubenswrapper[4915]: E0127 21:00:00.168042 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="extract-utilities" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.168050 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="extract-utilities" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.168299 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d499feb-9e58-4617-8172-4422737dd0bb" containerName="registry-server" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.169341 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.172862 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.173110 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.182949 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29"] Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.340655 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5ch\" (UniqueName: \"kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch\") pod 
\"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.340718 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.341094 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.443710 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.443869 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5ch\" (UniqueName: \"kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.443905 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.444708 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.456458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.462918 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5ch\" (UniqueName: \"kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch\") pod \"collect-profiles-29492460-glj29\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.535157 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:00 crc kubenswrapper[4915]: I0127 21:00:00.988594 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29"] Jan 27 21:00:01 crc kubenswrapper[4915]: I0127 21:00:01.134261 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" event={"ID":"c4788263-bc9c-4c2a-8b8a-a4f20061100f","Type":"ContainerStarted","Data":"417d94e06fd65b89026c889f65fb96bbc583f6c201b4b1345fcd97f0039167f7"} Jan 27 21:00:02 crc kubenswrapper[4915]: I0127 21:00:02.156879 4915 generic.go:334] "Generic (PLEG): container finished" podID="c4788263-bc9c-4c2a-8b8a-a4f20061100f" containerID="2843c350b681c6a96f4f08be1a1e58600174f63299013ac762da4a8bb1f0cdc3" exitCode=0 Jan 27 21:00:02 crc kubenswrapper[4915]: I0127 21:00:02.156960 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" event={"ID":"c4788263-bc9c-4c2a-8b8a-a4f20061100f","Type":"ContainerDied","Data":"2843c350b681c6a96f4f08be1a1e58600174f63299013ac762da4a8bb1f0cdc3"} Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.503636 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.516072 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume\") pod \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.516162 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume\") pod \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.516260 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5ch\" (UniqueName: \"kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch\") pod \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\" (UID: \"c4788263-bc9c-4c2a-8b8a-a4f20061100f\") " Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.518403 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4788263-bc9c-4c2a-8b8a-a4f20061100f" (UID: "c4788263-bc9c-4c2a-8b8a-a4f20061100f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.525623 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch" (OuterVolumeSpecName: "kube-api-access-zt5ch") pod "c4788263-bc9c-4c2a-8b8a-a4f20061100f" (UID: "c4788263-bc9c-4c2a-8b8a-a4f20061100f"). 
InnerVolumeSpecName "kube-api-access-zt5ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.533025 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4788263-bc9c-4c2a-8b8a-a4f20061100f" (UID: "c4788263-bc9c-4c2a-8b8a-a4f20061100f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.619141 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5ch\" (UniqueName: \"kubernetes.io/projected/c4788263-bc9c-4c2a-8b8a-a4f20061100f-kube-api-access-zt5ch\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.619434 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4788263-bc9c-4c2a-8b8a-a4f20061100f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:03 crc kubenswrapper[4915]: I0127 21:00:03.619605 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4788263-bc9c-4c2a-8b8a-a4f20061100f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:04 crc kubenswrapper[4915]: I0127 21:00:04.178215 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" event={"ID":"c4788263-bc9c-4c2a-8b8a-a4f20061100f","Type":"ContainerDied","Data":"417d94e06fd65b89026c889f65fb96bbc583f6c201b4b1345fcd97f0039167f7"} Jan 27 21:00:04 crc kubenswrapper[4915]: I0127 21:00:04.178601 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417d94e06fd65b89026c889f65fb96bbc583f6c201b4b1345fcd97f0039167f7" Jan 27 21:00:04 crc kubenswrapper[4915]: I0127 21:00:04.178304 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glj29" Jan 27 21:00:04 crc kubenswrapper[4915]: I0127 21:00:04.587445 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq"] Jan 27 21:00:04 crc kubenswrapper[4915]: I0127 21:00:04.595297 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-trlxq"] Jan 27 21:00:05 crc kubenswrapper[4915]: I0127 21:00:05.369454 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2584d4-dae1-4371-be0f-f27cedc845f9" path="/var/lib/kubelet/pods/0a2584d4-dae1-4371-be0f-f27cedc845f9/volumes" Jan 27 21:00:20 crc kubenswrapper[4915]: I0127 21:00:20.624976 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:00:20 crc kubenswrapper[4915]: I0127 21:00:20.625488 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:00:50 crc kubenswrapper[4915]: I0127 21:00:50.624769 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:00:50 crc kubenswrapper[4915]: I0127 21:00:50.625220 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:00:50 crc kubenswrapper[4915]: I0127 21:00:50.625261 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 21:00:50 crc kubenswrapper[4915]: I0127 21:00:50.625988 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:00:50 crc kubenswrapper[4915]: I0127 21:00:50.626037 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845" gracePeriod=600 Jan 27 21:00:51 crc kubenswrapper[4915]: I0127 21:00:51.688554 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845" exitCode=0 Jan 27 21:00:51 crc kubenswrapper[4915]: I0127 21:00:51.688641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845"} Jan 27 21:00:51 crc kubenswrapper[4915]: I0127 21:00:51.689256 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d"} Jan 27 21:00:51 crc kubenswrapper[4915]: I0127 21:00:51.689287 4915 scope.go:117] "RemoveContainer" containerID="af4e8b156a0ec58c0e35f6aea6abe357333661a08eb0301c39c246329585d731" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.155865 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492461-dwc92"] Jan 27 21:01:00 crc kubenswrapper[4915]: E0127 21:01:00.156918 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4788263-bc9c-4c2a-8b8a-a4f20061100f" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.156941 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4788263-bc9c-4c2a-8b8a-a4f20061100f" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.157393 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4788263-bc9c-4c2a-8b8a-a4f20061100f" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.158078 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.176867 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492461-dwc92"] Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.256737 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.257149 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.257207 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5tmz\" (UniqueName: \"kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.257267 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.273494 4915 scope.go:117] "RemoveContainer" 
containerID="7130398f323392b546c973bf9f26a234c699ec4d90acc64223c4d9179671b8c7" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.358791 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.358903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.358955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5tmz\" (UniqueName: \"kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.358995 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.372356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " 
pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.372456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.372647 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.379639 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5tmz\" (UniqueName: \"kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz\") pod \"keystone-cron-29492461-dwc92\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:00 crc kubenswrapper[4915]: I0127 21:01:00.494160 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:01 crc kubenswrapper[4915]: I0127 21:01:01.001449 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492461-dwc92"] Jan 27 21:01:01 crc kubenswrapper[4915]: W0127 21:01:01.008391 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda24fbf57_ee45_4af3_a8c3_d7be1a8d5190.slice/crio-e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9 WatchSource:0}: Error finding container e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9: Status 404 returned error can't find the container with id e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9 Jan 27 21:01:01 crc kubenswrapper[4915]: I0127 21:01:01.804457 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-dwc92" event={"ID":"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190","Type":"ContainerStarted","Data":"86569d36a5498020d0207e90a11d36bc5168fe96ca9e0856465ff2ef9cd8fb8b"} Jan 27 21:01:01 crc kubenswrapper[4915]: I0127 21:01:01.805081 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-dwc92" event={"ID":"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190","Type":"ContainerStarted","Data":"e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9"} Jan 27 21:01:01 crc kubenswrapper[4915]: I0127 21:01:01.839946 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492461-dwc92" podStartSLOduration=1.839911874 podStartE2EDuration="1.839911874s" podCreationTimestamp="2026-01-27 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:01:01.826720646 +0000 UTC m=+8353.184574360" watchObservedRunningTime="2026-01-27 21:01:01.839911874 +0000 UTC m=+8353.197765568" Jan 27 21:01:03 crc 
kubenswrapper[4915]: I0127 21:01:03.820624 4915 generic.go:334] "Generic (PLEG): container finished" podID="a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" containerID="86569d36a5498020d0207e90a11d36bc5168fe96ca9e0856465ff2ef9cd8fb8b" exitCode=0 Jan 27 21:01:03 crc kubenswrapper[4915]: I0127 21:01:03.820706 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-dwc92" event={"ID":"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190","Type":"ContainerDied","Data":"86569d36a5498020d0207e90a11d36bc5168fe96ca9e0856465ff2ef9cd8fb8b"} Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.209757 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.252696 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data\") pod \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.252839 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle\") pod \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.252959 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5tmz\" (UniqueName: \"kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz\") pod \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.253144 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys\") pod \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\" (UID: \"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190\") " Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.260379 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" (UID: "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.266956 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz" (OuterVolumeSpecName: "kube-api-access-d5tmz") pod "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" (UID: "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190"). InnerVolumeSpecName "kube-api-access-d5tmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.284556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" (UID: "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.304021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data" (OuterVolumeSpecName: "config-data") pod "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" (UID: "a24fbf57-ee45-4af3-a8c3-d7be1a8d5190"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.355843 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.355896 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.355912 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5tmz\" (UniqueName: \"kubernetes.io/projected/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-kube-api-access-d5tmz\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.355925 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a24fbf57-ee45-4af3-a8c3-d7be1a8d5190-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.838980 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-dwc92" event={"ID":"a24fbf57-ee45-4af3-a8c3-d7be1a8d5190","Type":"ContainerDied","Data":"e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9"} Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.839376 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c67ae7493317dcfd38e38a66bc5717047beb3c91bcc3d35b0ffdd7842180b9" Jan 27 21:01:05 crc kubenswrapper[4915]: I0127 21:01:05.839050 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492461-dwc92" Jan 27 21:02:50 crc kubenswrapper[4915]: I0127 21:02:50.624391 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:02:50 crc kubenswrapper[4915]: I0127 21:02:50.624933 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:03:20 crc kubenswrapper[4915]: I0127 21:03:20.624741 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:03:20 crc kubenswrapper[4915]: I0127 21:03:20.625240 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.472232 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:25 crc kubenswrapper[4915]: E0127 21:03:25.473448 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" containerName="keystone-cron" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.473473 
4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" containerName="keystone-cron" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.473893 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24fbf57-ee45-4af3-a8c3-d7be1a8d5190" containerName="keystone-cron" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.476138 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.489253 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.540156 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.540237 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.540504 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsc67\" (UniqueName: \"kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 
21:03:25.642148 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.642218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.642365 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsc67\" (UniqueName: \"kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.642840 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.642869 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.662690 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsc67\" (UniqueName: \"kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67\") pod \"community-operators-kk899\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:25 crc kubenswrapper[4915]: I0127 21:03:25.815706 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:26 crc kubenswrapper[4915]: I0127 21:03:26.646229 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:26 crc kubenswrapper[4915]: W0127 21:03:26.651195 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc00c4123_f7db_4fb2_bc4e_5ab019113755.slice/crio-d6bde09cf27516d4a26a335ff38815d4a136294d1c42aa4b1d03199fd9f4024c WatchSource:0}: Error finding container d6bde09cf27516d4a26a335ff38815d4a136294d1c42aa4b1d03199fd9f4024c: Status 404 returned error can't find the container with id d6bde09cf27516d4a26a335ff38815d4a136294d1c42aa4b1d03199fd9f4024c Jan 27 21:03:27 crc kubenswrapper[4915]: I0127 21:03:27.272423 4915 generic.go:334] "Generic (PLEG): container finished" podID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerID="1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6" exitCode=0 Jan 27 21:03:27 crc kubenswrapper[4915]: I0127 21:03:27.272926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerDied","Data":"1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6"} Jan 27 21:03:27 crc kubenswrapper[4915]: I0127 21:03:27.273048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" 
event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerStarted","Data":"d6bde09cf27516d4a26a335ff38815d4a136294d1c42aa4b1d03199fd9f4024c"} Jan 27 21:03:28 crc kubenswrapper[4915]: I0127 21:03:28.285873 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerStarted","Data":"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73"} Jan 27 21:03:29 crc kubenswrapper[4915]: I0127 21:03:29.305440 4915 generic.go:334] "Generic (PLEG): container finished" podID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerID="248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73" exitCode=0 Jan 27 21:03:29 crc kubenswrapper[4915]: I0127 21:03:29.305502 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerDied","Data":"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73"} Jan 27 21:03:30 crc kubenswrapper[4915]: I0127 21:03:30.317858 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerStarted","Data":"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f"} Jan 27 21:03:30 crc kubenswrapper[4915]: I0127 21:03:30.360465 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kk899" podStartSLOduration=2.671423135 podStartE2EDuration="5.360439641s" podCreationTimestamp="2026-01-27 21:03:25 +0000 UTC" firstStartedPulling="2026-01-27 21:03:27.274639663 +0000 UTC m=+8498.632493327" lastFinishedPulling="2026-01-27 21:03:29.963656149 +0000 UTC m=+8501.321509833" observedRunningTime="2026-01-27 21:03:30.352917884 +0000 UTC m=+8501.710771548" watchObservedRunningTime="2026-01-27 21:03:30.360439641 +0000 UTC 
m=+8501.718293315" Jan 27 21:03:35 crc kubenswrapper[4915]: I0127 21:03:35.815998 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:35 crc kubenswrapper[4915]: I0127 21:03:35.816554 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:35 crc kubenswrapper[4915]: I0127 21:03:35.893702 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:36 crc kubenswrapper[4915]: I0127 21:03:36.440603 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:36 crc kubenswrapper[4915]: I0127 21:03:36.501252 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:38 crc kubenswrapper[4915]: I0127 21:03:38.387624 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kk899" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="registry-server" containerID="cri-o://417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f" gracePeriod=2 Jan 27 21:03:38 crc kubenswrapper[4915]: I0127 21:03:38.859332 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.023270 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsc67\" (UniqueName: \"kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67\") pod \"c00c4123-f7db-4fb2-bc4e-5ab019113755\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.023382 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content\") pod \"c00c4123-f7db-4fb2-bc4e-5ab019113755\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.023437 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities\") pod \"c00c4123-f7db-4fb2-bc4e-5ab019113755\" (UID: \"c00c4123-f7db-4fb2-bc4e-5ab019113755\") " Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.025138 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities" (OuterVolumeSpecName: "utilities") pod "c00c4123-f7db-4fb2-bc4e-5ab019113755" (UID: "c00c4123-f7db-4fb2-bc4e-5ab019113755"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.032542 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67" (OuterVolumeSpecName: "kube-api-access-gsc67") pod "c00c4123-f7db-4fb2-bc4e-5ab019113755" (UID: "c00c4123-f7db-4fb2-bc4e-5ab019113755"). InnerVolumeSpecName "kube-api-access-gsc67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.083780 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c00c4123-f7db-4fb2-bc4e-5ab019113755" (UID: "c00c4123-f7db-4fb2-bc4e-5ab019113755"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.125639 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsc67\" (UniqueName: \"kubernetes.io/projected/c00c4123-f7db-4fb2-bc4e-5ab019113755-kube-api-access-gsc67\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.125676 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.125688 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00c4123-f7db-4fb2-bc4e-5ab019113755-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.397846 4915 generic.go:334] "Generic (PLEG): container finished" podID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerID="417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f" exitCode=0 Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.399186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerDied","Data":"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f"} Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.399226 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-kk899" event={"ID":"c00c4123-f7db-4fb2-bc4e-5ab019113755","Type":"ContainerDied","Data":"d6bde09cf27516d4a26a335ff38815d4a136294d1c42aa4b1d03199fd9f4024c"} Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.399247 4915 scope.go:117] "RemoveContainer" containerID="417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.399254 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk899" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.444334 4915 scope.go:117] "RemoveContainer" containerID="248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.444524 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.457004 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kk899"] Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.461321 4915 scope.go:117] "RemoveContainer" containerID="1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.507728 4915 scope.go:117] "RemoveContainer" containerID="417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f" Jan 27 21:03:39 crc kubenswrapper[4915]: E0127 21:03:39.508106 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f\": container with ID starting with 417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f not found: ID does not exist" containerID="417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 
21:03:39.508179 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f"} err="failed to get container status \"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f\": rpc error: code = NotFound desc = could not find container \"417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f\": container with ID starting with 417f85ef5b3bcbc856ec57c85643052026592fa8d22fcfbe68a26bfabe80936f not found: ID does not exist" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.508210 4915 scope.go:117] "RemoveContainer" containerID="248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73" Jan 27 21:03:39 crc kubenswrapper[4915]: E0127 21:03:39.508626 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73\": container with ID starting with 248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73 not found: ID does not exist" containerID="248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.508675 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73"} err="failed to get container status \"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73\": rpc error: code = NotFound desc = could not find container \"248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73\": container with ID starting with 248a454247cc46e6cbee96ac22f7f3ba0435a09248e51d2663c8d52471064c73 not found: ID does not exist" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.508703 4915 scope.go:117] "RemoveContainer" containerID="1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6" Jan 27 21:03:39 crc 
kubenswrapper[4915]: E0127 21:03:39.508973 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6\": container with ID starting with 1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6 not found: ID does not exist" containerID="1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6" Jan 27 21:03:39 crc kubenswrapper[4915]: I0127 21:03:39.509010 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6"} err="failed to get container status \"1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6\": rpc error: code = NotFound desc = could not find container \"1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6\": container with ID starting with 1a1baad27db254e65eb94cef5b96d20c57cdbf248544355988c0b67907de05e6 not found: ID does not exist" Jan 27 21:03:41 crc kubenswrapper[4915]: I0127 21:03:41.368508 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" path="/var/lib/kubelet/pods/c00c4123-f7db-4fb2-bc4e-5ab019113755/volumes" Jan 27 21:03:50 crc kubenswrapper[4915]: I0127 21:03:50.624717 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:03:50 crc kubenswrapper[4915]: I0127 21:03:50.625396 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 21:03:50 crc kubenswrapper[4915]: I0127 21:03:50.625436 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 21:03:50 crc kubenswrapper[4915]: I0127 21:03:50.626339 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:03:50 crc kubenswrapper[4915]: I0127 21:03:50.626391 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" gracePeriod=600 Jan 27 21:03:50 crc kubenswrapper[4915]: E0127 21:03:50.755035 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:03:51 crc kubenswrapper[4915]: I0127 21:03:51.508392 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" exitCode=0 Jan 27 21:03:51 crc kubenswrapper[4915]: I0127 21:03:51.508438 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d"} Jan 27 21:03:51 crc kubenswrapper[4915]: I0127 21:03:51.508470 4915 scope.go:117] "RemoveContainer" containerID="9f35575782ffc79ba96e4548f073340d2943c17941f0b887ab44c5d085259845" Jan 27 21:03:51 crc kubenswrapper[4915]: I0127 21:03:51.509139 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:03:51 crc kubenswrapper[4915]: E0127 21:03:51.509453 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:05 crc kubenswrapper[4915]: I0127 21:04:05.357705 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:04:05 crc kubenswrapper[4915]: E0127 21:04:05.358558 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.046019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:06 crc kubenswrapper[4915]: E0127 21:04:06.046821 4915 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="extract-utilities" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.046842 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="extract-utilities" Jan 27 21:04:06 crc kubenswrapper[4915]: E0127 21:04:06.046886 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="registry-server" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.046894 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="registry-server" Jan 27 21:04:06 crc kubenswrapper[4915]: E0127 21:04:06.046909 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="extract-content" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.046919 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="extract-content" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.047151 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00c4123-f7db-4fb2-bc4e-5ab019113755" containerName="registry-server" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.048768 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.065273 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.078547 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.078668 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnj5m\" (UniqueName: \"kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.078720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.180418 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.180557 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cnj5m\" (UniqueName: \"kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.180615 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.181021 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.181123 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.210772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnj5m\" (UniqueName: \"kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m\") pod \"redhat-marketplace-gkjnm\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:06 crc kubenswrapper[4915]: I0127 21:04:06.374032 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:07 crc kubenswrapper[4915]: I0127 21:04:06.874598 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:07 crc kubenswrapper[4915]: I0127 21:04:07.672966 4915 generic.go:334] "Generic (PLEG): container finished" podID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerID="85039cc0739efa638e95cc70bd1187ca7b91966174d11f03967283e87e393da7" exitCode=0 Jan 27 21:04:07 crc kubenswrapper[4915]: I0127 21:04:07.673303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerDied","Data":"85039cc0739efa638e95cc70bd1187ca7b91966174d11f03967283e87e393da7"} Jan 27 21:04:07 crc kubenswrapper[4915]: I0127 21:04:07.673342 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerStarted","Data":"34edcc507aa193119c365930507c18f61a691fd41fc40e29d140dd01d2812a98"} Jan 27 21:04:10 crc kubenswrapper[4915]: I0127 21:04:10.710696 4915 generic.go:334] "Generic (PLEG): container finished" podID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerID="f62cf3ebc830c5f36be09d4546ab82a1c8b63ccd7ae0f5b2403622a4d54fffc9" exitCode=0 Jan 27 21:04:10 crc kubenswrapper[4915]: I0127 21:04:10.710852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerDied","Data":"f62cf3ebc830c5f36be09d4546ab82a1c8b63ccd7ae0f5b2403622a4d54fffc9"} Jan 27 21:04:12 crc kubenswrapper[4915]: I0127 21:04:12.737741 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" 
event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerStarted","Data":"074f1a501bc5d490b19cd5745bb4f98690577b239f82bd97feb8497133209755"} Jan 27 21:04:16 crc kubenswrapper[4915]: I0127 21:04:16.374312 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:16 crc kubenswrapper[4915]: I0127 21:04:16.374773 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:16 crc kubenswrapper[4915]: I0127 21:04:16.429737 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:16 crc kubenswrapper[4915]: I0127 21:04:16.449883 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gkjnm" podStartSLOduration=5.966248567 podStartE2EDuration="10.44986061s" podCreationTimestamp="2026-01-27 21:04:06 +0000 UTC" firstStartedPulling="2026-01-27 21:04:07.676419607 +0000 UTC m=+8539.034273311" lastFinishedPulling="2026-01-27 21:04:12.16003169 +0000 UTC m=+8543.517885354" observedRunningTime="2026-01-27 21:04:12.762211389 +0000 UTC m=+8544.120065053" watchObservedRunningTime="2026-01-27 21:04:16.44986061 +0000 UTC m=+8547.807714274" Jan 27 21:04:17 crc kubenswrapper[4915]: I0127 21:04:17.359675 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:04:17 crc kubenswrapper[4915]: E0127 21:04:17.360016 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:26 crc kubenswrapper[4915]: I0127 21:04:26.416626 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:26 crc kubenswrapper[4915]: I0127 21:04:26.474334 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:26 crc kubenswrapper[4915]: I0127 21:04:26.879242 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gkjnm" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="registry-server" containerID="cri-o://074f1a501bc5d490b19cd5745bb4f98690577b239f82bd97feb8497133209755" gracePeriod=2 Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.226066 4915 generic.go:334] "Generic (PLEG): container finished" podID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerID="074f1a501bc5d490b19cd5745bb4f98690577b239f82bd97feb8497133209755" exitCode=0 Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.226370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerDied","Data":"074f1a501bc5d490b19cd5745bb4f98690577b239f82bd97feb8497133209755"} Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.622943 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.685458 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnj5m\" (UniqueName: \"kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m\") pod \"805e5b59-ea97-49d2-9578-dd912db50bbd\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.685572 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities\") pod \"805e5b59-ea97-49d2-9578-dd912db50bbd\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.685621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content\") pod \"805e5b59-ea97-49d2-9578-dd912db50bbd\" (UID: \"805e5b59-ea97-49d2-9578-dd912db50bbd\") " Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.686929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities" (OuterVolumeSpecName: "utilities") pod "805e5b59-ea97-49d2-9578-dd912db50bbd" (UID: "805e5b59-ea97-49d2-9578-dd912db50bbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.693317 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m" (OuterVolumeSpecName: "kube-api-access-cnj5m") pod "805e5b59-ea97-49d2-9578-dd912db50bbd" (UID: "805e5b59-ea97-49d2-9578-dd912db50bbd"). InnerVolumeSpecName "kube-api-access-cnj5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.715378 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "805e5b59-ea97-49d2-9578-dd912db50bbd" (UID: "805e5b59-ea97-49d2-9578-dd912db50bbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.787995 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.788023 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnj5m\" (UniqueName: \"kubernetes.io/projected/805e5b59-ea97-49d2-9578-dd912db50bbd-kube-api-access-cnj5m\") on node \"crc\" DevicePath \"\"" Jan 27 21:04:28 crc kubenswrapper[4915]: I0127 21:04:28.788034 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805e5b59-ea97-49d2-9578-dd912db50bbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.236949 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gkjnm" event={"ID":"805e5b59-ea97-49d2-9578-dd912db50bbd","Type":"ContainerDied","Data":"34edcc507aa193119c365930507c18f61a691fd41fc40e29d140dd01d2812a98"} Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.237008 4915 scope.go:117] "RemoveContainer" containerID="074f1a501bc5d490b19cd5745bb4f98690577b239f82bd97feb8497133209755" Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.237014 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gkjnm" Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.277436 4915 scope.go:117] "RemoveContainer" containerID="f62cf3ebc830c5f36be09d4546ab82a1c8b63ccd7ae0f5b2403622a4d54fffc9" Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.294013 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.303711 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gkjnm"] Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.320289 4915 scope.go:117] "RemoveContainer" containerID="85039cc0739efa638e95cc70bd1187ca7b91966174d11f03967283e87e393da7" Jan 27 21:04:29 crc kubenswrapper[4915]: I0127 21:04:29.370753 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" path="/var/lib/kubelet/pods/805e5b59-ea97-49d2-9578-dd912db50bbd/volumes" Jan 27 21:04:32 crc kubenswrapper[4915]: I0127 21:04:32.358016 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:04:32 crc kubenswrapper[4915]: E0127 21:04:32.360999 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:45 crc kubenswrapper[4915]: I0127 21:04:45.357652 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:04:45 crc kubenswrapper[4915]: E0127 21:04:45.358479 4915 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.357903 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:04:56 crc kubenswrapper[4915]: E0127 21:04:56.358839 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.579247 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:04:56 crc kubenswrapper[4915]: E0127 21:04:56.579613 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="extract-content" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.579625 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="extract-content" Jan 27 21:04:56 crc kubenswrapper[4915]: E0127 21:04:56.579665 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="extract-utilities" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.579671 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" 
containerName="extract-utilities" Jan 27 21:04:56 crc kubenswrapper[4915]: E0127 21:04:56.579683 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="registry-server" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.579689 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="registry-server" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.579876 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="805e5b59-ea97-49d2-9578-dd912db50bbd" containerName="registry-server" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.581215 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.591740 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.710469 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6jf\" (UniqueName: \"kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.710562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.710667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.812367 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.812510 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.812616 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6jf\" (UniqueName: \"kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.812916 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.813441 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.833998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6jf\" (UniqueName: \"kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf\") pod \"certified-operators-28msb\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:56 crc kubenswrapper[4915]: I0127 21:04:56.909018 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:04:57 crc kubenswrapper[4915]: I0127 21:04:57.439879 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:04:57 crc kubenswrapper[4915]: W0127 21:04:57.441227 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd882799_3eeb_4ded_a5d9_e1d43a4b748c.slice/crio-9154addf2bbfe05c7da6bbec7b597e166610f9c232d1759613bff3378014d711 WatchSource:0}: Error finding container 9154addf2bbfe05c7da6bbec7b597e166610f9c232d1759613bff3378014d711: Status 404 returned error can't find the container with id 9154addf2bbfe05c7da6bbec7b597e166610f9c232d1759613bff3378014d711 Jan 27 21:04:57 crc kubenswrapper[4915]: I0127 21:04:57.507532 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerStarted","Data":"9154addf2bbfe05c7da6bbec7b597e166610f9c232d1759613bff3378014d711"} Jan 27 21:04:58 crc kubenswrapper[4915]: I0127 21:04:58.518397 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerID="907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b" exitCode=0 Jan 27 21:04:58 crc kubenswrapper[4915]: I0127 21:04:58.518500 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerDied","Data":"907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b"} Jan 27 21:04:58 crc kubenswrapper[4915]: I0127 21:04:58.520557 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:04:59 crc kubenswrapper[4915]: I0127 21:04:59.533491 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerStarted","Data":"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c"} Jan 27 21:05:00 crc kubenswrapper[4915]: I0127 21:05:00.545345 4915 generic.go:334] "Generic (PLEG): container finished" podID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerID="1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c" exitCode=0 Jan 27 21:05:00 crc kubenswrapper[4915]: I0127 21:05:00.545387 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerDied","Data":"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c"} Jan 27 21:05:01 crc kubenswrapper[4915]: I0127 21:05:01.556194 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerStarted","Data":"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274"} Jan 27 21:05:01 crc kubenswrapper[4915]: I0127 21:05:01.587863 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-28msb" podStartSLOduration=3.165826584 podStartE2EDuration="5.587832515s" podCreationTimestamp="2026-01-27 21:04:56 +0000 UTC" firstStartedPulling="2026-01-27 21:04:58.520236607 +0000 UTC m=+8589.878090271" lastFinishedPulling="2026-01-27 21:05:00.942242498 +0000 UTC m=+8592.300096202" observedRunningTime="2026-01-27 21:05:01.58078303 +0000 UTC m=+8592.938636694" watchObservedRunningTime="2026-01-27 21:05:01.587832515 +0000 UTC m=+8592.945686209" Jan 27 21:05:06 crc kubenswrapper[4915]: I0127 21:05:06.909646 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:06 crc kubenswrapper[4915]: I0127 21:05:06.910625 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:06 crc kubenswrapper[4915]: I0127 21:05:06.965929 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:07 crc kubenswrapper[4915]: I0127 21:05:07.357651 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:05:07 crc kubenswrapper[4915]: E0127 21:05:07.358195 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:05:07 crc kubenswrapper[4915]: I0127 21:05:07.664240 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:07 crc kubenswrapper[4915]: I0127 21:05:07.741334 4915 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:05:09 crc kubenswrapper[4915]: I0127 21:05:09.625111 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-28msb" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="registry-server" containerID="cri-o://611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274" gracePeriod=2 Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.137091 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.208597 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content\") pod \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.208756 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities\") pod \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.208890 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6jf\" (UniqueName: \"kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf\") pod \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\" (UID: \"bd882799-3eeb-4ded-a5d9-e1d43a4b748c\") " Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.209875 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities" (OuterVolumeSpecName: "utilities") pod 
"bd882799-3eeb-4ded-a5d9-e1d43a4b748c" (UID: "bd882799-3eeb-4ded-a5d9-e1d43a4b748c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.215528 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf" (OuterVolumeSpecName: "kube-api-access-bb6jf") pod "bd882799-3eeb-4ded-a5d9-e1d43a4b748c" (UID: "bd882799-3eeb-4ded-a5d9-e1d43a4b748c"). InnerVolumeSpecName "kube-api-access-bb6jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.260030 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd882799-3eeb-4ded-a5d9-e1d43a4b748c" (UID: "bd882799-3eeb-4ded-a5d9-e1d43a4b748c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.312167 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6jf\" (UniqueName: \"kubernetes.io/projected/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-kube-api-access-bb6jf\") on node \"crc\" DevicePath \"\"" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.312223 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.312243 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd882799-3eeb-4ded-a5d9-e1d43a4b748c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.636594 4915 generic.go:334] "Generic (PLEG): container finished" podID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerID="611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274" exitCode=0 Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.636734 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-28msb" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.636729 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerDied","Data":"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274"} Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.637232 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28msb" event={"ID":"bd882799-3eeb-4ded-a5d9-e1d43a4b748c","Type":"ContainerDied","Data":"9154addf2bbfe05c7da6bbec7b597e166610f9c232d1759613bff3378014d711"} Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.637252 4915 scope.go:117] "RemoveContainer" containerID="611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.661289 4915 scope.go:117] "RemoveContainer" containerID="1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.683034 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.696344 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-28msb"] Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.708960 4915 scope.go:117] "RemoveContainer" containerID="907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.762532 4915 scope.go:117] "RemoveContainer" containerID="611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274" Jan 27 21:05:10 crc kubenswrapper[4915]: E0127 21:05:10.763099 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274\": container with ID starting with 611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274 not found: ID does not exist" containerID="611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.763142 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274"} err="failed to get container status \"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274\": rpc error: code = NotFound desc = could not find container \"611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274\": container with ID starting with 611d2be6dc1c46d5a8991962f7142278c0f28db6749a06186fc5688d25fe9274 not found: ID does not exist" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.763178 4915 scope.go:117] "RemoveContainer" containerID="1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c" Jan 27 21:05:10 crc kubenswrapper[4915]: E0127 21:05:10.763477 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c\": container with ID starting with 1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c not found: ID does not exist" containerID="1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.763499 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c"} err="failed to get container status \"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c\": rpc error: code = NotFound desc = could not find container \"1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c\": container with ID 
starting with 1eb593a744280d762e0a767c694d2bedf1097200d4d832ed6bd981e0a3e72b6c not found: ID does not exist" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.763518 4915 scope.go:117] "RemoveContainer" containerID="907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b" Jan 27 21:05:10 crc kubenswrapper[4915]: E0127 21:05:10.763728 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b\": container with ID starting with 907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b not found: ID does not exist" containerID="907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b" Jan 27 21:05:10 crc kubenswrapper[4915]: I0127 21:05:10.763764 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b"} err="failed to get container status \"907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b\": rpc error: code = NotFound desc = could not find container \"907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b\": container with ID starting with 907536b364df180274d3f768d798d857dd01353e80206ccb2b248293c9c7929b not found: ID does not exist" Jan 27 21:05:11 crc kubenswrapper[4915]: I0127 21:05:11.369038 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" path="/var/lib/kubelet/pods/bd882799-3eeb-4ded-a5d9-e1d43a4b748c/volumes" Jan 27 21:05:19 crc kubenswrapper[4915]: I0127 21:05:19.369374 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:05:19 crc kubenswrapper[4915]: E0127 21:05:19.370922 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:05:32 crc kubenswrapper[4915]: I0127 21:05:32.358408 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:05:32 crc kubenswrapper[4915]: E0127 21:05:32.359768 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:05:46 crc kubenswrapper[4915]: I0127 21:05:46.357649 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:05:46 crc kubenswrapper[4915]: E0127 21:05:46.359492 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:06:00 crc kubenswrapper[4915]: I0127 21:06:00.359297 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:06:00 crc kubenswrapper[4915]: E0127 21:06:00.360371 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:06:12 crc kubenswrapper[4915]: I0127 21:06:12.358704 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:06:12 crc kubenswrapper[4915]: E0127 21:06:12.359647 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:06:24 crc kubenswrapper[4915]: I0127 21:06:24.362695 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:06:24 crc kubenswrapper[4915]: E0127 21:06:24.363412 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:06:36 crc kubenswrapper[4915]: I0127 21:06:36.357891 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:06:36 crc kubenswrapper[4915]: E0127 21:06:36.358729 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:06:47 crc kubenswrapper[4915]: I0127 21:06:47.359192 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:06:47 crc kubenswrapper[4915]: E0127 21:06:47.371009 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:07:00 crc kubenswrapper[4915]: I0127 21:07:00.357605 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:07:00 crc kubenswrapper[4915]: E0127 21:07:00.358588 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:07:14 crc kubenswrapper[4915]: I0127 21:07:14.357604 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:07:14 crc kubenswrapper[4915]: E0127 21:07:14.358408 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:07:28 crc kubenswrapper[4915]: I0127 21:07:28.358444 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:07:28 crc kubenswrapper[4915]: E0127 21:07:28.359275 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:07:43 crc kubenswrapper[4915]: I0127 21:07:43.363599 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:07:43 crc kubenswrapper[4915]: E0127 21:07:43.364226 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:07:57 crc kubenswrapper[4915]: I0127 21:07:57.358325 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:07:57 crc kubenswrapper[4915]: E0127 21:07:57.359325 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:08:08 crc kubenswrapper[4915]: I0127 21:08:08.358370 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:08:08 crc kubenswrapper[4915]: E0127 21:08:08.359632 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:08:22 crc kubenswrapper[4915]: I0127 21:08:22.357765 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:08:22 crc kubenswrapper[4915]: E0127 21:08:22.358772 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:08:33 crc kubenswrapper[4915]: I0127 21:08:33.359546 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:08:33 crc kubenswrapper[4915]: E0127 21:08:33.360853 4915 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:08:44 crc kubenswrapper[4915]: I0127 21:08:44.358179 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:08:44 crc kubenswrapper[4915]: E0127 21:08:44.358920 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:08:59 crc kubenswrapper[4915]: I0127 21:08:59.364868 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:08:59 crc kubenswrapper[4915]: I0127 21:08:59.761267 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d"} Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.626670 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:00 crc kubenswrapper[4915]: E0127 21:10:00.629088 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="extract-content" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 
21:10:00.629234 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="extract-content" Jan 27 21:10:00 crc kubenswrapper[4915]: E0127 21:10:00.629351 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="extract-utilities" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.629429 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="extract-utilities" Jan 27 21:10:00 crc kubenswrapper[4915]: E0127 21:10:00.629528 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="registry-server" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.629605 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="registry-server" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.629944 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd882799-3eeb-4ded-a5d9-e1d43a4b748c" containerName="registry-server" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.631874 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.642094 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.770690 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.770815 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsxq\" (UniqueName: \"kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.770916 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.873575 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.873765 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hmsxq\" (UniqueName: \"kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.873896 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.874530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.874674 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.902426 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsxq\" (UniqueName: \"kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq\") pod \"redhat-operators-s4dz5\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:00 crc kubenswrapper[4915]: I0127 21:10:00.977101 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:01 crc kubenswrapper[4915]: I0127 21:10:01.443443 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:01 crc kubenswrapper[4915]: I0127 21:10:01.580075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerStarted","Data":"77b117b84ad5084f7f18cc9ff6ba17216698ee6430b7529ff90d1621697a9641"} Jan 27 21:10:02 crc kubenswrapper[4915]: I0127 21:10:02.590965 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerID="286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1" exitCode=0 Jan 27 21:10:02 crc kubenswrapper[4915]: I0127 21:10:02.591063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerDied","Data":"286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1"} Jan 27 21:10:02 crc kubenswrapper[4915]: I0127 21:10:02.593885 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:10:03 crc kubenswrapper[4915]: I0127 21:10:03.601583 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerStarted","Data":"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491"} Jan 27 21:10:04 crc kubenswrapper[4915]: I0127 21:10:04.617253 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerID="8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491" exitCode=0 Jan 27 21:10:04 crc kubenswrapper[4915]: I0127 21:10:04.617298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerDied","Data":"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491"} Jan 27 21:10:06 crc kubenswrapper[4915]: I0127 21:10:06.642425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerStarted","Data":"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da"} Jan 27 21:10:06 crc kubenswrapper[4915]: I0127 21:10:06.666254 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4dz5" podStartSLOduration=4.20686685 podStartE2EDuration="6.666238556s" podCreationTimestamp="2026-01-27 21:10:00 +0000 UTC" firstStartedPulling="2026-01-27 21:10:02.592780767 +0000 UTC m=+8893.950634431" lastFinishedPulling="2026-01-27 21:10:05.052152463 +0000 UTC m=+8896.410006137" observedRunningTime="2026-01-27 21:10:06.665388375 +0000 UTC m=+8898.023242059" watchObservedRunningTime="2026-01-27 21:10:06.666238556 +0000 UTC m=+8898.024092230" Jan 27 21:10:10 crc kubenswrapper[4915]: I0127 21:10:10.977386 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:10 crc kubenswrapper[4915]: I0127 21:10:10.977711 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:12 crc kubenswrapper[4915]: I0127 21:10:12.020914 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4dz5" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="registry-server" probeResult="failure" output=< Jan 27 21:10:12 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Jan 27 21:10:12 crc kubenswrapper[4915]: > Jan 27 21:10:21 crc kubenswrapper[4915]: I0127 
21:10:21.050964 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:21 crc kubenswrapper[4915]: I0127 21:10:21.112196 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:21 crc kubenswrapper[4915]: I0127 21:10:21.286945 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:22 crc kubenswrapper[4915]: I0127 21:10:22.801183 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4dz5" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="registry-server" containerID="cri-o://cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da" gracePeriod=2 Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.303193 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.480766 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmsxq\" (UniqueName: \"kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq\") pod \"ee551a1f-cdba-4421-a321-4eb8f72639a4\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.480907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content\") pod \"ee551a1f-cdba-4421-a321-4eb8f72639a4\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.480944 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities\") pod \"ee551a1f-cdba-4421-a321-4eb8f72639a4\" (UID: \"ee551a1f-cdba-4421-a321-4eb8f72639a4\") " Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.482808 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities" (OuterVolumeSpecName: "utilities") pod "ee551a1f-cdba-4421-a321-4eb8f72639a4" (UID: "ee551a1f-cdba-4421-a321-4eb8f72639a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.487059 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq" (OuterVolumeSpecName: "kube-api-access-hmsxq") pod "ee551a1f-cdba-4421-a321-4eb8f72639a4" (UID: "ee551a1f-cdba-4421-a321-4eb8f72639a4"). InnerVolumeSpecName "kube-api-access-hmsxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.583668 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmsxq\" (UniqueName: \"kubernetes.io/projected/ee551a1f-cdba-4421-a321-4eb8f72639a4-kube-api-access-hmsxq\") on node \"crc\" DevicePath \"\"" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.583906 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.603834 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee551a1f-cdba-4421-a321-4eb8f72639a4" (UID: "ee551a1f-cdba-4421-a321-4eb8f72639a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.685421 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee551a1f-cdba-4421-a321-4eb8f72639a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.813089 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerID="cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da" exitCode=0 Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.813138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerDied","Data":"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da"} Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.813150 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4dz5" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.813180 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4dz5" event={"ID":"ee551a1f-cdba-4421-a321-4eb8f72639a4","Type":"ContainerDied","Data":"77b117b84ad5084f7f18cc9ff6ba17216698ee6430b7529ff90d1621697a9641"} Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.813198 4915 scope.go:117] "RemoveContainer" containerID="cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.845257 4915 scope.go:117] "RemoveContainer" containerID="8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.847361 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.859811 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4dz5"] Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.873511 4915 scope.go:117] "RemoveContainer" containerID="286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.912737 4915 scope.go:117] "RemoveContainer" containerID="cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da" Jan 27 21:10:23 crc kubenswrapper[4915]: E0127 21:10:23.913300 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da\": container with ID starting with cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da not found: ID does not exist" containerID="cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.913363 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da"} err="failed to get container status \"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da\": rpc error: code = NotFound desc = could not find container \"cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da\": container with ID starting with cc2c17c059344a230dab093a2fbc398969f43a339c48ed78a2a64a973224b8da not found: ID does not exist" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.913384 4915 scope.go:117] "RemoveContainer" containerID="8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491" Jan 27 21:10:23 crc kubenswrapper[4915]: E0127 21:10:23.913811 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491\": container with ID starting with 8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491 not found: ID does not exist" containerID="8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.913845 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491"} err="failed to get container status \"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491\": rpc error: code = NotFound desc = could not find container \"8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491\": container with ID starting with 8d098ad3075eca0d22d4c3f9b5d59eb0b13ad4ceff338d5fd1e2c0bde4341491 not found: ID does not exist" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.913864 4915 scope.go:117] "RemoveContainer" containerID="286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1" Jan 27 21:10:23 crc kubenswrapper[4915]: E0127 
21:10:23.914286 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1\": container with ID starting with 286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1 not found: ID does not exist" containerID="286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1" Jan 27 21:10:23 crc kubenswrapper[4915]: I0127 21:10:23.914315 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1"} err="failed to get container status \"286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1\": rpc error: code = NotFound desc = could not find container \"286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1\": container with ID starting with 286ea9affab27c689484389403ee817d5c63dbad7f4504726f3c392d8c0e09d1 not found: ID does not exist" Jan 27 21:10:25 crc kubenswrapper[4915]: I0127 21:10:25.370713 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" path="/var/lib/kubelet/pods/ee551a1f-cdba-4421-a321-4eb8f72639a4/volumes" Jan 27 21:11:20 crc kubenswrapper[4915]: I0127 21:11:20.624859 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:11:20 crc kubenswrapper[4915]: I0127 21:11:20.625714 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 21:11:50 crc kubenswrapper[4915]: I0127 21:11:50.624858 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:11:50 crc kubenswrapper[4915]: I0127 21:11:50.625384 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.624965 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.625611 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.625696 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.626277 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d"} 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.626335 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d" gracePeriod=600 Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.931901 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d" exitCode=0 Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.931999 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d"} Jan 27 21:12:20 crc kubenswrapper[4915]: I0127 21:12:20.933120 4915 scope.go:117] "RemoveContainer" containerID="80c7ce8176cfc6fa032c5513072c161332f610be00b2cbab2675690b6b6ce57d" Jan 27 21:12:21 crc kubenswrapper[4915]: I0127 21:12:21.942952 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"} Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.428171 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:39 crc kubenswrapper[4915]: E0127 21:13:39.429327 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="extract-content" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.429340 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="extract-content" Jan 27 21:13:39 crc kubenswrapper[4915]: E0127 21:13:39.429363 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="extract-utilities" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.429369 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="extract-utilities" Jan 27 21:13:39 crc kubenswrapper[4915]: E0127 21:13:39.429398 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="registry-server" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.429407 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="registry-server" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.429564 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee551a1f-cdba-4421-a321-4eb8f72639a4" containerName="registry-server" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.430839 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.454758 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.560954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.561275 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.561362 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xh5\" (UniqueName: \"kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.663336 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.663410 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-29xh5\" (UniqueName: \"kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.663551 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.664275 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.664403 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.687565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xh5\" (UniqueName: \"kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5\") pod \"community-operators-nmdp2\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:39 crc kubenswrapper[4915]: I0127 21:13:39.766668 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:40 crc kubenswrapper[4915]: I0127 21:13:40.309226 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:40 crc kubenswrapper[4915]: I0127 21:13:40.649323 4915 generic.go:334] "Generic (PLEG): container finished" podID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerID="b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf" exitCode=0 Jan 27 21:13:40 crc kubenswrapper[4915]: I0127 21:13:40.649443 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerDied","Data":"b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf"} Jan 27 21:13:40 crc kubenswrapper[4915]: I0127 21:13:40.649610 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerStarted","Data":"419426f403af72ec68b614c31c37736f2af12a0fbf3b99a839e72acc001fdadd"} Jan 27 21:13:41 crc kubenswrapper[4915]: I0127 21:13:41.661868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerStarted","Data":"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a"} Jan 27 21:13:42 crc kubenswrapper[4915]: I0127 21:13:42.676404 4915 generic.go:334] "Generic (PLEG): container finished" podID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerID="a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a" exitCode=0 Jan 27 21:13:42 crc kubenswrapper[4915]: I0127 21:13:42.676583 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" 
event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerDied","Data":"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a"} Jan 27 21:13:43 crc kubenswrapper[4915]: I0127 21:13:43.686599 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerStarted","Data":"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829"} Jan 27 21:13:49 crc kubenswrapper[4915]: I0127 21:13:49.767181 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:49 crc kubenswrapper[4915]: I0127 21:13:49.768120 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:49 crc kubenswrapper[4915]: I0127 21:13:49.862576 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:49 crc kubenswrapper[4915]: I0127 21:13:49.894061 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmdp2" podStartSLOduration=8.447338923 podStartE2EDuration="10.894040448s" podCreationTimestamp="2026-01-27 21:13:39 +0000 UTC" firstStartedPulling="2026-01-27 21:13:40.650975977 +0000 UTC m=+9112.008829641" lastFinishedPulling="2026-01-27 21:13:43.097677492 +0000 UTC m=+9114.455531166" observedRunningTime="2026-01-27 21:13:43.714783618 +0000 UTC m=+9115.072637292" watchObservedRunningTime="2026-01-27 21:13:49.894040448 +0000 UTC m=+9121.251894112" Jan 27 21:13:50 crc kubenswrapper[4915]: I0127 21:13:50.813881 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:50 crc kubenswrapper[4915]: I0127 21:13:50.876784 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:52 crc kubenswrapper[4915]: I0127 21:13:52.770316 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nmdp2" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="registry-server" containerID="cri-o://cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829" gracePeriod=2 Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.355755 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.363392 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content\") pod \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.363484 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities\") pod \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.363685 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29xh5\" (UniqueName: \"kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5\") pod \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\" (UID: \"849e320e-5ee0-45f7-8852-3ee8c88dbbfe\") " Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.364357 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities" (OuterVolumeSpecName: "utilities") pod "849e320e-5ee0-45f7-8852-3ee8c88dbbfe" (UID: 
"849e320e-5ee0-45f7-8852-3ee8c88dbbfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.364642 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.374309 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5" (OuterVolumeSpecName: "kube-api-access-29xh5") pod "849e320e-5ee0-45f7-8852-3ee8c88dbbfe" (UID: "849e320e-5ee0-45f7-8852-3ee8c88dbbfe"). InnerVolumeSpecName "kube-api-access-29xh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.428735 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "849e320e-5ee0-45f7-8852-3ee8c88dbbfe" (UID: "849e320e-5ee0-45f7-8852-3ee8c88dbbfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.466341 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.466613 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29xh5\" (UniqueName: \"kubernetes.io/projected/849e320e-5ee0-45f7-8852-3ee8c88dbbfe-kube-api-access-29xh5\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.785567 4915 generic.go:334] "Generic (PLEG): container finished" podID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerID="cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829" exitCode=0 Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.785609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerDied","Data":"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829"} Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.785635 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmdp2" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.785660 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdp2" event={"ID":"849e320e-5ee0-45f7-8852-3ee8c88dbbfe","Type":"ContainerDied","Data":"419426f403af72ec68b614c31c37736f2af12a0fbf3b99a839e72acc001fdadd"} Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.785691 4915 scope.go:117] "RemoveContainer" containerID="cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.813039 4915 scope.go:117] "RemoveContainer" containerID="a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.825839 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.843185 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmdp2"] Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.846490 4915 scope.go:117] "RemoveContainer" containerID="b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.883120 4915 scope.go:117] "RemoveContainer" containerID="cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829" Jan 27 21:13:53 crc kubenswrapper[4915]: E0127 21:13:53.883587 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829\": container with ID starting with cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829 not found: ID does not exist" containerID="cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.883620 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829"} err="failed to get container status \"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829\": rpc error: code = NotFound desc = could not find container \"cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829\": container with ID starting with cb3b0cacaef15abe991a0a3159993837316c19d9f955e2ad590776819cecb829 not found: ID does not exist" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.883640 4915 scope.go:117] "RemoveContainer" containerID="a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a" Jan 27 21:13:53 crc kubenswrapper[4915]: E0127 21:13:53.883989 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a\": container with ID starting with a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a not found: ID does not exist" containerID="a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.884053 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a"} err="failed to get container status \"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a\": rpc error: code = NotFound desc = could not find container \"a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a\": container with ID starting with a08527794e07f6d43293645a64ba18c929b9579ee96f1917af3b830021aa722a not found: ID does not exist" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.884067 4915 scope.go:117] "RemoveContainer" containerID="b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf" Jan 27 21:13:53 crc kubenswrapper[4915]: E0127 
21:13:53.884307 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf\": container with ID starting with b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf not found: ID does not exist" containerID="b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf" Jan 27 21:13:53 crc kubenswrapper[4915]: I0127 21:13:53.884361 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf"} err="failed to get container status \"b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf\": rpc error: code = NotFound desc = could not find container \"b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf\": container with ID starting with b650a637df27f2fc3d7dd21a97d544a4f7419b997a740526d48f4e8a9c1dfbbf not found: ID does not exist" Jan 27 21:13:55 crc kubenswrapper[4915]: I0127 21:13:55.370067 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" path="/var/lib/kubelet/pods/849e320e-5ee0-45f7-8852-3ee8c88dbbfe/volumes" Jan 27 21:14:20 crc kubenswrapper[4915]: I0127 21:14:20.624414 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:14:20 crc kubenswrapper[4915]: I0127 21:14:20.625308 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 21:14:50 crc kubenswrapper[4915]: I0127 21:14:50.624254 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:14:50 crc kubenswrapper[4915]: I0127 21:14:50.625090 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.153555 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt"] Jan 27 21:15:00 crc kubenswrapper[4915]: E0127 21:15:00.154607 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.154623 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4915]: E0127 21:15:00.154647 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.154656 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4915]: E0127 21:15:00.154681 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 
21:15:00.154689 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.155212 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="849e320e-5ee0-45f7-8852-3ee8c88dbbfe" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.156059 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.159530 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.159896 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.185513 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt"] Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.258174 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.258229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.258360 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.359942 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.360154 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.360251 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.361157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.376931 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.380279 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr\") pod \"collect-profiles-29492475-xsdxt\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.492672 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:00 crc kubenswrapper[4915]: I0127 21:15:00.999332 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt"] Jan 27 21:15:01 crc kubenswrapper[4915]: I0127 21:15:01.446160 4915 generic.go:334] "Generic (PLEG): container finished" podID="c29d0b6f-123d-4787-a857-457b85377880" containerID="8780f1d241b985bf20a0e9bfdcbca985a95246cffa1c2618598ed77a3ab33f09" exitCode=0 Jan 27 21:15:01 crc kubenswrapper[4915]: I0127 21:15:01.446252 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" event={"ID":"c29d0b6f-123d-4787-a857-457b85377880","Type":"ContainerDied","Data":"8780f1d241b985bf20a0e9bfdcbca985a95246cffa1c2618598ed77a3ab33f09"} Jan 27 21:15:01 crc kubenswrapper[4915]: I0127 21:15:01.447522 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" event={"ID":"c29d0b6f-123d-4787-a857-457b85377880","Type":"ContainerStarted","Data":"dafc0bc7d521ee6ef88b70230731e34a8896abe15cbbf407400fa919e9ee0512"} Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.802479 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.934931 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume\") pod \"c29d0b6f-123d-4787-a857-457b85377880\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.935105 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr\") pod \"c29d0b6f-123d-4787-a857-457b85377880\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.935276 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume\") pod \"c29d0b6f-123d-4787-a857-457b85377880\" (UID: \"c29d0b6f-123d-4787-a857-457b85377880\") " Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.936177 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume" (OuterVolumeSpecName: "config-volume") pod "c29d0b6f-123d-4787-a857-457b85377880" (UID: "c29d0b6f-123d-4787-a857-457b85377880"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.940384 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr" (OuterVolumeSpecName: "kube-api-access-7clfr") pod "c29d0b6f-123d-4787-a857-457b85377880" (UID: "c29d0b6f-123d-4787-a857-457b85377880"). 
InnerVolumeSpecName "kube-api-access-7clfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:15:02 crc kubenswrapper[4915]: I0127 21:15:02.941437 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c29d0b6f-123d-4787-a857-457b85377880" (UID: "c29d0b6f-123d-4787-a857-457b85377880"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.037933 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29d0b6f-123d-4787-a857-457b85377880-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.037977 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/c29d0b6f-123d-4787-a857-457b85377880-kube-api-access-7clfr\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.037991 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29d0b6f-123d-4787-a857-457b85377880-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.467622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" event={"ID":"c29d0b6f-123d-4787-a857-457b85377880","Type":"ContainerDied","Data":"dafc0bc7d521ee6ef88b70230731e34a8896abe15cbbf407400fa919e9ee0512"} Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.467682 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafc0bc7d521ee6ef88b70230731e34a8896abe15cbbf407400fa919e9ee0512" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.467730 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-xsdxt" Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.876610 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"] Jan 27 21:15:03 crc kubenswrapper[4915]: I0127 21:15:03.887907 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-pzhw4"] Jan 27 21:15:05 crc kubenswrapper[4915]: I0127 21:15:05.371208 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0fc2e7-ca2f-4479-a3ba-8512004f1298" path="/var/lib/kubelet/pods/2f0fc2e7-ca2f-4479-a3ba-8512004f1298/volumes" Jan 27 21:15:20 crc kubenswrapper[4915]: I0127 21:15:20.625065 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:15:20 crc kubenswrapper[4915]: I0127 21:15:20.625776 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:15:20 crc kubenswrapper[4915]: I0127 21:15:20.625895 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 21:15:20 crc kubenswrapper[4915]: I0127 21:15:20.627267 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"} 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:15:20 crc kubenswrapper[4915]: I0127 21:15:20.627392 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" gracePeriod=600 Jan 27 21:15:20 crc kubenswrapper[4915]: E0127 21:15:20.759143 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:15:21 crc kubenswrapper[4915]: I0127 21:15:21.628893 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" exitCode=0 Jan 27 21:15:21 crc kubenswrapper[4915]: I0127 21:15:21.628952 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"} Jan 27 21:15:21 crc kubenswrapper[4915]: I0127 21:15:21.628994 4915 scope.go:117] "RemoveContainer" containerID="22f124dd04d7c88cd9a2a4d2e4d2707fc784e175ebb87aadd75834e52a39ac5d" Jan 27 21:15:21 crc kubenswrapper[4915]: I0127 21:15:21.629973 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 
27 21:15:21 crc kubenswrapper[4915]: E0127 21:15:21.630289 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:15:34 crc kubenswrapper[4915]: I0127 21:15:34.357989 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:15:34 crc kubenswrapper[4915]: E0127 21:15:34.360235 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:15:48 crc kubenswrapper[4915]: I0127 21:15:48.358065 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:15:48 crc kubenswrapper[4915]: E0127 21:15:48.358772 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:16:00 crc kubenswrapper[4915]: I0127 21:16:00.679514 4915 scope.go:117] "RemoveContainer" 
containerID="fbd10a4cd783b1c991e66052af6fbc2fe56831fc5ed435310b4cb64511279579" Jan 27 21:16:01 crc kubenswrapper[4915]: I0127 21:16:01.358279 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:16:01 crc kubenswrapper[4915]: E0127 21:16:01.358982 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:16:14 crc kubenswrapper[4915]: I0127 21:16:14.357711 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:16:14 crc kubenswrapper[4915]: E0127 21:16:14.358392 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:16:27 crc kubenswrapper[4915]: I0127 21:16:27.357262 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:16:27 crc kubenswrapper[4915]: E0127 21:16:27.359020 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:16:42 crc kubenswrapper[4915]: I0127 21:16:42.357968 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:16:42 crc kubenswrapper[4915]: E0127 21:16:42.359464 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:16:53 crc kubenswrapper[4915]: I0127 21:16:53.357776 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:16:53 crc kubenswrapper[4915]: E0127 21:16:53.358730 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:04 crc kubenswrapper[4915]: I0127 21:17:04.357705 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:17:04 crc kubenswrapper[4915]: E0127 21:17:04.358593 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:16 crc kubenswrapper[4915]: I0127 21:17:16.357457 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:17:16 crc kubenswrapper[4915]: E0127 21:17:16.358358 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:28 crc kubenswrapper[4915]: I0127 21:17:28.360357 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:17:28 crc kubenswrapper[4915]: E0127 21:17:28.361888 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:41 crc kubenswrapper[4915]: I0127 21:17:41.359919 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:17:41 crc kubenswrapper[4915]: E0127 21:17:41.361141 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.248418 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:17:46 crc kubenswrapper[4915]: E0127 21:17:46.249784 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29d0b6f-123d-4787-a857-457b85377880" containerName="collect-profiles" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.249819 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29d0b6f-123d-4787-a857-457b85377880" containerName="collect-profiles" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.250101 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29d0b6f-123d-4787-a857-457b85377880" containerName="collect-profiles" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.252272 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.278906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdqwp\" (UniqueName: \"kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.278961 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.279007 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.279720 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.381417 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdqwp\" (UniqueName: \"kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.381488 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.381554 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.382737 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.383109 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.407180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdqwp\" (UniqueName: \"kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp\") pod \"certified-operators-t6t9r\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:46 crc kubenswrapper[4915]: I0127 21:17:46.579078 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:47 crc kubenswrapper[4915]: I0127 21:17:47.095801 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:17:47 crc kubenswrapper[4915]: I0127 21:17:47.498052 4915 generic.go:334] "Generic (PLEG): container finished" podID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerID="c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d" exitCode=0 Jan 27 21:17:47 crc kubenswrapper[4915]: I0127 21:17:47.498135 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerDied","Data":"c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d"} Jan 27 21:17:47 crc kubenswrapper[4915]: I0127 21:17:47.498414 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerStarted","Data":"4ac3d82707e0034345e36c98478b5b61ad4dc515ee3a04965a44df63c9e1168d"} Jan 27 21:17:47 crc kubenswrapper[4915]: I0127 21:17:47.501648 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.255832 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.259905 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.271527 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.354077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.354122 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.354163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.456381 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.456447 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.456531 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.457306 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.457996 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.488112 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx\") pod \"redhat-marketplace-wkwqn\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.518319 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerID="713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c" exitCode=0 Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.518367 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerDied","Data":"713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c"} Jan 27 21:17:49 crc kubenswrapper[4915]: I0127 21:17:49.587691 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 21:17:50.056712 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:17:50 crc kubenswrapper[4915]: W0127 21:17:50.059529 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e4929b_499d_41d4_bc7d_c70fe59e5d82.slice/crio-9da0d32d714fa7e1974a75f13d7faf255970c4716b8e014f895679a18651bd89 WatchSource:0}: Error finding container 9da0d32d714fa7e1974a75f13d7faf255970c4716b8e014f895679a18651bd89: Status 404 returned error can't find the container with id 9da0d32d714fa7e1974a75f13d7faf255970c4716b8e014f895679a18651bd89 Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 21:17:50.529489 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerStarted","Data":"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32"} Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 21:17:50.531539 4915 generic.go:334] "Generic (PLEG): container finished" podID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerID="cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b" exitCode=0 Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 
21:17:50.531570 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerDied","Data":"cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b"} Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 21:17:50.531586 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerStarted","Data":"9da0d32d714fa7e1974a75f13d7faf255970c4716b8e014f895679a18651bd89"} Jan 27 21:17:50 crc kubenswrapper[4915]: I0127 21:17:50.561539 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6t9r" podStartSLOduration=2.161276996 podStartE2EDuration="4.561511332s" podCreationTimestamp="2026-01-27 21:17:46 +0000 UTC" firstStartedPulling="2026-01-27 21:17:47.501101655 +0000 UTC m=+9358.858955359" lastFinishedPulling="2026-01-27 21:17:49.901336031 +0000 UTC m=+9361.259189695" observedRunningTime="2026-01-27 21:17:50.549396272 +0000 UTC m=+9361.907249966" watchObservedRunningTime="2026-01-27 21:17:50.561511332 +0000 UTC m=+9361.919365016" Jan 27 21:17:52 crc kubenswrapper[4915]: I0127 21:17:52.551917 4915 generic.go:334] "Generic (PLEG): container finished" podID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerID="f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf" exitCode=0 Jan 27 21:17:52 crc kubenswrapper[4915]: I0127 21:17:52.552039 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerDied","Data":"f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf"} Jan 27 21:17:53 crc kubenswrapper[4915]: I0127 21:17:53.566494 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" 
event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerStarted","Data":"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a"} Jan 27 21:17:53 crc kubenswrapper[4915]: I0127 21:17:53.594176 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wkwqn" podStartSLOduration=2.1784096330000002 podStartE2EDuration="4.594158322s" podCreationTimestamp="2026-01-27 21:17:49 +0000 UTC" firstStartedPulling="2026-01-27 21:17:50.533561071 +0000 UTC m=+9361.891414735" lastFinishedPulling="2026-01-27 21:17:52.94930976 +0000 UTC m=+9364.307163424" observedRunningTime="2026-01-27 21:17:53.588756729 +0000 UTC m=+9364.946610423" watchObservedRunningTime="2026-01-27 21:17:53.594158322 +0000 UTC m=+9364.952011996" Jan 27 21:17:56 crc kubenswrapper[4915]: I0127 21:17:56.357504 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18" Jan 27 21:17:56 crc kubenswrapper[4915]: E0127 21:17:56.358317 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:17:56 crc kubenswrapper[4915]: I0127 21:17:56.579980 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:56 crc kubenswrapper[4915]: I0127 21:17:56.580054 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:56 crc kubenswrapper[4915]: I0127 21:17:56.628234 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:57 crc kubenswrapper[4915]: I0127 21:17:57.646933 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:17:58 crc kubenswrapper[4915]: I0127 21:17:58.030755 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:17:59 crc kubenswrapper[4915]: I0127 21:17:59.588141 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:59 crc kubenswrapper[4915]: I0127 21:17:59.588580 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:59 crc kubenswrapper[4915]: I0127 21:17:59.614064 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6t9r" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="registry-server" containerID="cri-o://435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32" gracePeriod=2 Jan 27 21:17:59 crc kubenswrapper[4915]: I0127 21:17:59.638889 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:17:59 crc kubenswrapper[4915]: I0127 21:17:59.690470 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.450732 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.572043 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.622831 4915 generic.go:334] "Generic (PLEG): container finished" podID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerID="435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32" exitCode=0 Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.622876 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6t9r" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.622915 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerDied","Data":"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32"} Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.622975 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6t9r" event={"ID":"baa5930d-c42f-44c3-9ea4-fd82303b7e12","Type":"ContainerDied","Data":"4ac3d82707e0034345e36c98478b5b61ad4dc515ee3a04965a44df63c9e1168d"} Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.623001 4915 scope.go:117] "RemoveContainer" containerID="435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.648371 4915 scope.go:117] "RemoveContainer" containerID="713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.670139 4915 scope.go:117] "RemoveContainer" containerID="c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.689303 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content\") pod 
\"baa5930d-c42f-44c3-9ea4-fd82303b7e12\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.689467 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdqwp\" (UniqueName: \"kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp\") pod \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.689598 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities\") pod \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\" (UID: \"baa5930d-c42f-44c3-9ea4-fd82303b7e12\") " Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.695718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities" (OuterVolumeSpecName: "utilities") pod "baa5930d-c42f-44c3-9ea4-fd82303b7e12" (UID: "baa5930d-c42f-44c3-9ea4-fd82303b7e12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.696334 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp" (OuterVolumeSpecName: "kube-api-access-gdqwp") pod "baa5930d-c42f-44c3-9ea4-fd82303b7e12" (UID: "baa5930d-c42f-44c3-9ea4-fd82303b7e12"). InnerVolumeSpecName "kube-api-access-gdqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.745738 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baa5930d-c42f-44c3-9ea4-fd82303b7e12" (UID: "baa5930d-c42f-44c3-9ea4-fd82303b7e12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.761400 4915 scope.go:117] "RemoveContainer" containerID="435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32" Jan 27 21:18:00 crc kubenswrapper[4915]: E0127 21:18:00.761905 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32\": container with ID starting with 435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32 not found: ID does not exist" containerID="435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.761959 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32"} err="failed to get container status \"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32\": rpc error: code = NotFound desc = could not find container \"435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32\": container with ID starting with 435f0edcd043d7810a9e47a4523b422823ecf0ee73c172c09521cc5947d44b32 not found: ID does not exist" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.761996 4915 scope.go:117] "RemoveContainer" containerID="713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c" Jan 27 21:18:00 crc kubenswrapper[4915]: E0127 21:18:00.762308 4915 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c\": container with ID starting with 713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c not found: ID does not exist" containerID="713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.762373 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c"} err="failed to get container status \"713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c\": rpc error: code = NotFound desc = could not find container \"713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c\": container with ID starting with 713785b4ab903a5da03e79f080d2a833026a449ab0360c83341532c72f7d740c not found: ID does not exist" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.762396 4915 scope.go:117] "RemoveContainer" containerID="c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d" Jan 27 21:18:00 crc kubenswrapper[4915]: E0127 21:18:00.762651 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d\": container with ID starting with c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d not found: ID does not exist" containerID="c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.762718 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d"} err="failed to get container status \"c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d\": rpc error: code = NotFound desc = could 
not find container \"c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d\": container with ID starting with c6a72439ed2c5cb27491b8a6040cd8300ddd7e783a91e087f2feffdc076fbe1d not found: ID does not exist" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.792218 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.792261 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdqwp\" (UniqueName: \"kubernetes.io/projected/baa5930d-c42f-44c3-9ea4-fd82303b7e12-kube-api-access-gdqwp\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.792276 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baa5930d-c42f-44c3-9ea4-fd82303b7e12-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.949158 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:18:00 crc kubenswrapper[4915]: I0127 21:18:00.955840 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6t9r"] Jan 27 21:18:01 crc kubenswrapper[4915]: I0127 21:18:01.368768 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" path="/var/lib/kubelet/pods/baa5930d-c42f-44c3-9ea4-fd82303b7e12/volumes" Jan 27 21:18:01 crc kubenswrapper[4915]: I0127 21:18:01.631974 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wkwqn" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="registry-server" containerID="cri-o://c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a" gracePeriod=2 Jan 27 
21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.120729 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.219251 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities\") pod \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.219327 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content\") pod \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.219446 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx\") pod \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\" (UID: \"96e4929b-499d-41d4-bc7d-c70fe59e5d82\") " Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.220313 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities" (OuterVolumeSpecName: "utilities") pod "96e4929b-499d-41d4-bc7d-c70fe59e5d82" (UID: "96e4929b-499d-41d4-bc7d-c70fe59e5d82"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.227919 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx" (OuterVolumeSpecName: "kube-api-access-b5zxx") pod "96e4929b-499d-41d4-bc7d-c70fe59e5d82" (UID: "96e4929b-499d-41d4-bc7d-c70fe59e5d82"). InnerVolumeSpecName "kube-api-access-b5zxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.244946 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96e4929b-499d-41d4-bc7d-c70fe59e5d82" (UID: "96e4929b-499d-41d4-bc7d-c70fe59e5d82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.322500 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.322561 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96e4929b-499d-41d4-bc7d-c70fe59e5d82-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.322584 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5zxx\" (UniqueName: \"kubernetes.io/projected/96e4929b-499d-41d4-bc7d-c70fe59e5d82-kube-api-access-b5zxx\") on node \"crc\" DevicePath \"\"" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.647390 4915 generic.go:334] "Generic (PLEG): container finished" podID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" 
containerID="c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a" exitCode=0 Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.647445 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerDied","Data":"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a"} Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.647941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkwqn" event={"ID":"96e4929b-499d-41d4-bc7d-c70fe59e5d82","Type":"ContainerDied","Data":"9da0d32d714fa7e1974a75f13d7faf255970c4716b8e014f895679a18651bd89"} Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.647984 4915 scope.go:117] "RemoveContainer" containerID="c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.647502 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkwqn" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.682518 4915 scope.go:117] "RemoveContainer" containerID="f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.706225 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.715870 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkwqn"] Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.742231 4915 scope.go:117] "RemoveContainer" containerID="cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.788517 4915 scope.go:117] "RemoveContainer" containerID="c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a" Jan 27 21:18:02 crc kubenswrapper[4915]: E0127 21:18:02.789121 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a\": container with ID starting with c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a not found: ID does not exist" containerID="c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.789180 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a"} err="failed to get container status \"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a\": rpc error: code = NotFound desc = could not find container \"c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a\": container with ID starting with c065d10123895fa1406fbbfbd4be9e062b60540191684c27a594ce78f39ef43a not found: 
ID does not exist" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.789213 4915 scope.go:117] "RemoveContainer" containerID="f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf" Jan 27 21:18:02 crc kubenswrapper[4915]: E0127 21:18:02.789572 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf\": container with ID starting with f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf not found: ID does not exist" containerID="f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.789607 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf"} err="failed to get container status \"f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf\": rpc error: code = NotFound desc = could not find container \"f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf\": container with ID starting with f896c598ce3ab04616f8d234f1287e631ab9d6d4739499c69f07fffbb78d27cf not found: ID does not exist" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.789632 4915 scope.go:117] "RemoveContainer" containerID="cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b" Jan 27 21:18:02 crc kubenswrapper[4915]: E0127 21:18:02.789940 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b\": container with ID starting with cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b not found: ID does not exist" containerID="cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b" Jan 27 21:18:02 crc kubenswrapper[4915]: I0127 21:18:02.789972 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b"} err="failed to get container status \"cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b\": rpc error: code = NotFound desc = could not find container \"cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b\": container with ID starting with cda541434e527a224272fc068b79acb0767d92a31096e172b7d3e8660ba2865b not found: ID does not exist"
Jan 27 21:18:03 crc kubenswrapper[4915]: I0127 21:18:03.374113 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" path="/var/lib/kubelet/pods/96e4929b-499d-41d4-bc7d-c70fe59e5d82/volumes"
Jan 27 21:18:10 crc kubenswrapper[4915]: I0127 21:18:10.358377 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:18:10 crc kubenswrapper[4915]: E0127 21:18:10.359991 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:18:25 crc kubenswrapper[4915]: I0127 21:18:25.358205 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:18:25 crc kubenswrapper[4915]: E0127 21:18:25.359769 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\""
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:18:40 crc kubenswrapper[4915]: I0127 21:18:40.358301 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:18:40 crc kubenswrapper[4915]: E0127 21:18:40.359436 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:18:55 crc kubenswrapper[4915]: I0127 21:18:55.358124 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:18:55 crc kubenswrapper[4915]: E0127 21:18:55.358963 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:19:10 crc kubenswrapper[4915]: I0127 21:19:10.358524 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:19:10 crc kubenswrapper[4915]: E0127 21:19:10.359255 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:19:24 crc kubenswrapper[4915]: I0127 21:19:24.358451 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:19:24 crc kubenswrapper[4915]: E0127 21:19:24.359974 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:19:39 crc kubenswrapper[4915]: I0127 21:19:39.365306 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:19:39 crc kubenswrapper[4915]: E0127 21:19:39.366075 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:19:54 crc kubenswrapper[4915]: I0127 21:19:54.358214 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:19:54 crc kubenswrapper[4915]: E0127 21:19:54.359086 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:20:08 crc kubenswrapper[4915]: I0127 21:20:08.358034 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:20:08 crc kubenswrapper[4915]: E0127 21:20:08.359207 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.975349 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976327 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="extract-content"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976342 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="extract-content"
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976360 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="extract-content"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976369 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="extract-content"
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976384 4915 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="extract-utilities"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976393 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="extract-utilities"
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976418 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976429 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976454 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976466 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: E0127 21:20:22.976485 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="extract-utilities"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976496 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="extract-utilities"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976780 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4929b-499d-41d4-bc7d-c70fe59e5d82" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.976842 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa5930d-c42f-44c3-9ea4-fd82303b7e12" containerName="registry-server"
Jan 27 21:20:22 crc kubenswrapper[4915]: I0127 21:20:22.978623 4915 util.go:30] "No sandbox for pod can
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:22.992920 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.036555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.036628 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.036709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsdr\" (UniqueName: \"kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.137534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.137648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-xpsdr\" (UniqueName: \"kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.137717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.138099 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.138151 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.173003 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsdr\" (UniqueName: \"kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr\") pod \"redhat-operators-zwjg5\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") " pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.353334 4915 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.358716 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:20:23 crc kubenswrapper[4915]: I0127 21:20:23.841004 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:24 crc kubenswrapper[4915]: I0127 21:20:24.058635 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerStarted","Data":"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"}
Jan 27 21:20:24 crc kubenswrapper[4915]: I0127 21:20:24.059124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerStarted","Data":"756260f20dd4088e20da9837180fadc7f854f101e24a105d0fd9b25caa9f493a"}
Jan 27 21:20:24 crc kubenswrapper[4915]: I0127 21:20:24.072011 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9"}
Jan 27 21:20:25 crc kubenswrapper[4915]: I0127 21:20:25.084728 4915 generic.go:334] "Generic (PLEG): container finished" podID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerID="dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c" exitCode=0
Jan 27 21:20:25 crc kubenswrapper[4915]: I0127 21:20:25.084857 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerDied","Data":"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"}
Jan 27 21:20:27 crc
kubenswrapper[4915]: I0127 21:20:27.424367 4915 generic.go:334] "Generic (PLEG): container finished" podID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerID="2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417" exitCode=0
Jan 27 21:20:27 crc kubenswrapper[4915]: I0127 21:20:27.424448 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerDied","Data":"2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417"}
Jan 27 21:20:28 crc kubenswrapper[4915]: I0127 21:20:28.438331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerStarted","Data":"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"}
Jan 27 21:20:28 crc kubenswrapper[4915]: I0127 21:20:28.465261 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwjg5" podStartSLOduration=3.678998257 podStartE2EDuration="6.465243319s" podCreationTimestamp="2026-01-27 21:20:22 +0000 UTC" firstStartedPulling="2026-01-27 21:20:25.087883466 +0000 UTC m=+9516.445737130" lastFinishedPulling="2026-01-27 21:20:27.874128508 +0000 UTC m=+9519.231982192" observedRunningTime="2026-01-27 21:20:28.462994503 +0000 UTC m=+9519.820848227" watchObservedRunningTime="2026-01-27 21:20:28.465243319 +0000 UTC m=+9519.823096993"
Jan 27 21:20:33 crc kubenswrapper[4915]: I0127 21:20:33.354021 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:33 crc kubenswrapper[4915]: I0127 21:20:33.354811 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:34 crc kubenswrapper[4915]: I0127 21:20:34.477531 4915 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-marketplace/redhat-operators-zwjg5" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="registry-server" probeResult="failure" output=<
Jan 27 21:20:34 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Jan 27 21:20:34 crc kubenswrapper[4915]: >
Jan 27 21:20:43 crc kubenswrapper[4915]: I0127 21:20:43.416555 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:43 crc kubenswrapper[4915]: I0127 21:20:43.512688 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:43 crc kubenswrapper[4915]: I0127 21:20:43.664749 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:44 crc kubenswrapper[4915]: I0127 21:20:44.600911 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwjg5" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="registry-server" containerID="cri-o://5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03" gracePeriod=2
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.081902 4915 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.207543 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpsdr\" (UniqueName: \"kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr\") pod \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") "
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.207821 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content\") pod \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") "
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.207934 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities\") pod \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\" (UID: \"86f4123b-20b5-44c5-8f55-e9980b3bfc68\") "
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.208967 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities" (OuterVolumeSpecName: "utilities") pod "86f4123b-20b5-44c5-8f55-e9980b3bfc68" (UID: "86f4123b-20b5-44c5-8f55-e9980b3bfc68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.213240 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr" (OuterVolumeSpecName: "kube-api-access-xpsdr") pod "86f4123b-20b5-44c5-8f55-e9980b3bfc68" (UID: "86f4123b-20b5-44c5-8f55-e9980b3bfc68"). InnerVolumeSpecName "kube-api-access-xpsdr".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.310438 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpsdr\" (UniqueName: \"kubernetes.io/projected/86f4123b-20b5-44c5-8f55-e9980b3bfc68-kube-api-access-xpsdr\") on node \"crc\" DevicePath \"\""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.310480 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.331880 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86f4123b-20b5-44c5-8f55-e9980b3bfc68" (UID: "86f4123b-20b5-44c5-8f55-e9980b3bfc68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.412255 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86f4123b-20b5-44c5-8f55-e9980b3bfc68-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.611125 4915 generic.go:334] "Generic (PLEG): container finished" podID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerID="5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03" exitCode=0
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.611166 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerDied","Data":"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"}
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.611198 4915 util.go:48] "No ready sandbox for pod can be
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwjg5"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.611216 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwjg5" event={"ID":"86f4123b-20b5-44c5-8f55-e9980b3bfc68","Type":"ContainerDied","Data":"756260f20dd4088e20da9837180fadc7f854f101e24a105d0fd9b25caa9f493a"}
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.611237 4915 scope.go:117] "RemoveContainer" containerID="5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.645696 4915 scope.go:117] "RemoveContainer" containerID="2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.646885 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.657542 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwjg5"]
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.671961 4915 scope.go:117] "RemoveContainer" containerID="dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.711362 4915 scope.go:117] "RemoveContainer" containerID="5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"
Jan 27 21:20:45 crc kubenswrapper[4915]: E0127 21:20:45.712248 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03\": container with ID starting with 5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03 not found: ID does not exist" containerID="5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.712291 4915
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03"} err="failed to get container status \"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03\": rpc error: code = NotFound desc = could not find container \"5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03\": container with ID starting with 5d793745b3757867e7aae755b99c7c1326a127bdc361c0435c9cd05e91cd0f03 not found: ID does not exist"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.712320 4915 scope.go:117] "RemoveContainer" containerID="2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417"
Jan 27 21:20:45 crc kubenswrapper[4915]: E0127 21:20:45.712769 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417\": container with ID starting with 2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417 not found: ID does not exist" containerID="2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.712840 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417"} err="failed to get container status \"2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417\": rpc error: code = NotFound desc = could not find container \"2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417\": container with ID starting with 2c16a383461e07caa2deb3ad2fadfb6d554c92957f6183ca5924705d0a1ff417 not found: ID does not exist"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.712867 4915 scope.go:117] "RemoveContainer" containerID="dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"
Jan 27 21:20:45 crc kubenswrapper[4915]: E0127
21:20:45.713229 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c\": container with ID starting with dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c not found: ID does not exist" containerID="dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"
Jan 27 21:20:45 crc kubenswrapper[4915]: I0127 21:20:45.713272 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c"} err="failed to get container status \"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c\": rpc error: code = NotFound desc = could not find container \"dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c\": container with ID starting with dbf5d9ce7d25c9113bf77ad7bfdf4ea67c22ee2ab479c9c2654d1da53064d08c not found: ID does not exist"
Jan 27 21:20:47 crc kubenswrapper[4915]: I0127 21:20:47.374680 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" path="/var/lib/kubelet/pods/86f4123b-20b5-44c5-8f55-e9980b3bfc68/volumes"
Jan 27 21:22:50 crc kubenswrapper[4915]: I0127 21:22:50.625057 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 21:22:50 crc kubenswrapper[4915]: I0127 21:22:50.625775 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection
refused"
Jan 27 21:23:20 crc kubenswrapper[4915]: I0127 21:23:20.624225 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 21:23:20 crc kubenswrapper[4915]: I0127 21:23:20.624680 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 21:23:50 crc kubenswrapper[4915]: I0127 21:23:50.625368 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 21:23:50 crc kubenswrapper[4915]: I0127 21:23:50.626311 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 21:23:50 crc kubenswrapper[4915]: I0127 21:23:50.626383 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj"
Jan 27 21:23:50 crc kubenswrapper[4915]: I0127 21:23:50.627351 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9"}
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 21:23:50 crc kubenswrapper[4915]: I0127 21:23:50.627429 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9" gracePeriod=600
Jan 27 21:23:51 crc kubenswrapper[4915]: I0127 21:23:51.420001 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9" exitCode=0
Jan 27 21:23:51 crc kubenswrapper[4915]: I0127 21:23:51.420041 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9"}
Jan 27 21:23:51 crc kubenswrapper[4915]: I0127 21:23:51.420518 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"}
Jan 27 21:23:51 crc kubenswrapper[4915]: I0127 21:23:51.420595 4915 scope.go:117] "RemoveContainer" containerID="08b61bcb968ed9eb369154cc158773b39b09e710dafde33b796066ced382be18"
Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.080213 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzllq"]
Jan 27 21:24:02 crc kubenswrapper[4915]: E0127 21:24:02.081069 4915 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="extract-content" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.081083 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="extract-content" Jan 27 21:24:02 crc kubenswrapper[4915]: E0127 21:24:02.081095 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="registry-server" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.081101 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="registry-server" Jan 27 21:24:02 crc kubenswrapper[4915]: E0127 21:24:02.081115 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="extract-utilities" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.081121 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="extract-utilities" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.081294 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f4123b-20b5-44c5-8f55-e9980b3bfc68" containerName="registry-server" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.083293 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.097215 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzllq"] Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.208106 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.208499 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rscl\" (UniqueName: \"kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.208656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.310481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.310587 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6rscl\" (UniqueName: \"kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.310638 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.311073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.311112 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.327772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rscl\" (UniqueName: \"kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl\") pod \"community-operators-mzllq\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.411637 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:02 crc kubenswrapper[4915]: I0127 21:24:02.887739 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzllq"] Jan 27 21:24:03 crc kubenswrapper[4915]: I0127 21:24:03.544192 4915 generic.go:334] "Generic (PLEG): container finished" podID="93ed0a15-240d-4a71-8467-b736e743db7a" containerID="8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f" exitCode=0 Jan 27 21:24:03 crc kubenswrapper[4915]: I0127 21:24:03.544383 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerDied","Data":"8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f"} Jan 27 21:24:03 crc kubenswrapper[4915]: I0127 21:24:03.544810 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerStarted","Data":"61531bcf47ebb053118950b0b72f9b7aff183f5bcf50993e25a7e21f9007ae46"} Jan 27 21:24:03 crc kubenswrapper[4915]: I0127 21:24:03.548206 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:24:04 crc kubenswrapper[4915]: I0127 21:24:04.554286 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerStarted","Data":"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d"} Jan 27 21:24:05 crc kubenswrapper[4915]: I0127 21:24:05.567759 4915 generic.go:334] "Generic (PLEG): container finished" podID="93ed0a15-240d-4a71-8467-b736e743db7a" containerID="9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d" exitCode=0 Jan 27 21:24:05 crc kubenswrapper[4915]: I0127 21:24:05.567910 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerDied","Data":"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d"} Jan 27 21:24:06 crc kubenswrapper[4915]: I0127 21:24:06.578011 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerStarted","Data":"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893"} Jan 27 21:24:06 crc kubenswrapper[4915]: I0127 21:24:06.598726 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzllq" podStartSLOduration=2.108276495 podStartE2EDuration="4.59870708s" podCreationTimestamp="2026-01-27 21:24:02 +0000 UTC" firstStartedPulling="2026-01-27 21:24:03.547974833 +0000 UTC m=+9734.905828497" lastFinishedPulling="2026-01-27 21:24:06.038405418 +0000 UTC m=+9737.396259082" observedRunningTime="2026-01-27 21:24:06.593894551 +0000 UTC m=+9737.951748235" watchObservedRunningTime="2026-01-27 21:24:06.59870708 +0000 UTC m=+9737.956560744" Jan 27 21:24:12 crc kubenswrapper[4915]: I0127 21:24:12.412062 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:12 crc kubenswrapper[4915]: I0127 21:24:12.412568 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:12 crc kubenswrapper[4915]: I0127 21:24:12.491832 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:12 crc kubenswrapper[4915]: I0127 21:24:12.675870 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:12 crc kubenswrapper[4915]: I0127 21:24:12.728696 
4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzllq"] Jan 27 21:24:14 crc kubenswrapper[4915]: I0127 21:24:14.651247 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mzllq" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="registry-server" containerID="cri-o://bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893" gracePeriod=2 Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.180897 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.288975 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content\") pod \"93ed0a15-240d-4a71-8467-b736e743db7a\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.289668 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities\") pod \"93ed0a15-240d-4a71-8467-b736e743db7a\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.289966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rscl\" (UniqueName: \"kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl\") pod \"93ed0a15-240d-4a71-8467-b736e743db7a\" (UID: \"93ed0a15-240d-4a71-8467-b736e743db7a\") " Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.290460 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities" (OuterVolumeSpecName: "utilities") pod 
"93ed0a15-240d-4a71-8467-b736e743db7a" (UID: "93ed0a15-240d-4a71-8467-b736e743db7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.292455 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.302976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl" (OuterVolumeSpecName: "kube-api-access-6rscl") pod "93ed0a15-240d-4a71-8467-b736e743db7a" (UID: "93ed0a15-240d-4a71-8467-b736e743db7a"). InnerVolumeSpecName "kube-api-access-6rscl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.340697 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93ed0a15-240d-4a71-8467-b736e743db7a" (UID: "93ed0a15-240d-4a71-8467-b736e743db7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.394331 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ed0a15-240d-4a71-8467-b736e743db7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.394472 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rscl\" (UniqueName: \"kubernetes.io/projected/93ed0a15-240d-4a71-8467-b736e743db7a-kube-api-access-6rscl\") on node \"crc\" DevicePath \"\"" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.668548 4915 generic.go:334] "Generic (PLEG): container finished" podID="93ed0a15-240d-4a71-8467-b736e743db7a" containerID="bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893" exitCode=0 Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.668653 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerDied","Data":"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893"} Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.669080 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzllq" event={"ID":"93ed0a15-240d-4a71-8467-b736e743db7a","Type":"ContainerDied","Data":"61531bcf47ebb053118950b0b72f9b7aff183f5bcf50993e25a7e21f9007ae46"} Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.669118 4915 scope.go:117] "RemoveContainer" containerID="bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.668666 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzllq" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.698084 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzllq"] Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.707745 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mzllq"] Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.708226 4915 scope.go:117] "RemoveContainer" containerID="9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.737020 4915 scope.go:117] "RemoveContainer" containerID="8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.814251 4915 scope.go:117] "RemoveContainer" containerID="bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893" Jan 27 21:24:15 crc kubenswrapper[4915]: E0127 21:24:15.815255 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893\": container with ID starting with bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893 not found: ID does not exist" containerID="bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.815328 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893"} err="failed to get container status \"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893\": rpc error: code = NotFound desc = could not find container \"bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893\": container with ID starting with bee7c34b37dcbaeffe3712d8bf5703e06c366cb76baa131b7a7e659af7409893 not 
found: ID does not exist" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.815374 4915 scope.go:117] "RemoveContainer" containerID="9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d" Jan 27 21:24:15 crc kubenswrapper[4915]: E0127 21:24:15.815892 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d\": container with ID starting with 9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d not found: ID does not exist" containerID="9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.815927 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d"} err="failed to get container status \"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d\": rpc error: code = NotFound desc = could not find container \"9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d\": container with ID starting with 9f30c75e2d2f8e735d310e32357b1eaa5c07d46fb532951ab0864e528254ef3d not found: ID does not exist" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.816012 4915 scope.go:117] "RemoveContainer" containerID="8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f" Jan 27 21:24:15 crc kubenswrapper[4915]: E0127 21:24:15.816383 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f\": container with ID starting with 8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f not found: ID does not exist" containerID="8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f" Jan 27 21:24:15 crc kubenswrapper[4915]: I0127 21:24:15.816420 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f"} err="failed to get container status \"8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f\": rpc error: code = NotFound desc = could not find container \"8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f\": container with ID starting with 8b2ae8eb1ca8d5fcbf8081588401059580d91cebd46182c205d216061cedbe4f not found: ID does not exist" Jan 27 21:24:17 crc kubenswrapper[4915]: I0127 21:24:17.375347 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" path="/var/lib/kubelet/pods/93ed0a15-240d-4a71-8467-b736e743db7a/volumes" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.603662 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vt59w/must-gather-9w4f4"] Jan 27 21:25:12 crc kubenswrapper[4915]: E0127 21:25:12.604722 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="extract-content" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.604737 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="extract-content" Jan 27 21:25:12 crc kubenswrapper[4915]: E0127 21:25:12.604750 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="extract-utilities" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.604758 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="extract-utilities" Jan 27 21:25:12 crc kubenswrapper[4915]: E0127 21:25:12.604775 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="registry-server" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 
21:25:12.604784 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="registry-server" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.605048 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ed0a15-240d-4a71-8467-b736e743db7a" containerName="registry-server" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.606522 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.608720 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vt59w"/"kube-root-ca.crt" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.608961 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vt59w"/"default-dockercfg-nz7bv" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.610454 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vt59w"/"openshift-service-ca.crt" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.627548 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vt59w/must-gather-9w4f4"] Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.684235 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output\") pod \"must-gather-9w4f4\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.684308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2nz\" (UniqueName: \"kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz\") pod \"must-gather-9w4f4\" 
(UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.786103 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output\") pod \"must-gather-9w4f4\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.786203 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2nz\" (UniqueName: \"kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz\") pod \"must-gather-9w4f4\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.786651 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output\") pod \"must-gather-9w4f4\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.806313 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2nz\" (UniqueName: \"kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz\") pod \"must-gather-9w4f4\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:12 crc kubenswrapper[4915]: I0127 21:25:12.929371 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:25:13 crc kubenswrapper[4915]: I0127 21:25:13.370693 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vt59w/must-gather-9w4f4"] Jan 27 21:25:13 crc kubenswrapper[4915]: W0127 21:25:13.373936 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f9c81c5_31fb_48a5_9a0d_6f87b46ec965.slice/crio-11c3a22ae8d630358d825d92bbf971faf3c4e8526f97367542611da1fa3d9481 WatchSource:0}: Error finding container 11c3a22ae8d630358d825d92bbf971faf3c4e8526f97367542611da1fa3d9481: Status 404 returned error can't find the container with id 11c3a22ae8d630358d825d92bbf971faf3c4e8526f97367542611da1fa3d9481 Jan 27 21:25:14 crc kubenswrapper[4915]: I0127 21:25:14.245717 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/must-gather-9w4f4" event={"ID":"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965","Type":"ContainerStarted","Data":"11c3a22ae8d630358d825d92bbf971faf3c4e8526f97367542611da1fa3d9481"} Jan 27 21:25:20 crc kubenswrapper[4915]: I0127 21:25:20.302581 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/must-gather-9w4f4" event={"ID":"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965","Type":"ContainerStarted","Data":"f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9"} Jan 27 21:25:20 crc kubenswrapper[4915]: I0127 21:25:20.303226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/must-gather-9w4f4" event={"ID":"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965","Type":"ContainerStarted","Data":"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6"} Jan 27 21:25:20 crc kubenswrapper[4915]: I0127 21:25:20.318233 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vt59w/must-gather-9w4f4" podStartSLOduration=2.107342687 
podStartE2EDuration="8.318214936s" podCreationTimestamp="2026-01-27 21:25:12 +0000 UTC" firstStartedPulling="2026-01-27 21:25:13.384590604 +0000 UTC m=+9804.742444268" lastFinishedPulling="2026-01-27 21:25:19.595462813 +0000 UTC m=+9810.953316517" observedRunningTime="2026-01-27 21:25:20.31672124 +0000 UTC m=+9811.674574904" watchObservedRunningTime="2026-01-27 21:25:20.318214936 +0000 UTC m=+9811.676068600" Jan 27 21:25:24 crc kubenswrapper[4915]: I0127 21:25:24.870572 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vt59w/crc-debug-qvzl8"] Jan 27 21:25:24 crc kubenswrapper[4915]: I0127 21:25:24.874233 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:24 crc kubenswrapper[4915]: I0127 21:25:24.944089 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:24 crc kubenswrapper[4915]: I0127 21:25:24.944139 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcnb\" (UniqueName: \"kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.046545 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.046811 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fcnb\" (UniqueName: \"kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.046839 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.071529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fcnb\" (UniqueName: \"kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb\") pod \"crc-debug-qvzl8\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.203569 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:25 crc kubenswrapper[4915]: I0127 21:25:25.371684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" event={"ID":"987ef9d0-475c-427e-8102-89613e048de5","Type":"ContainerStarted","Data":"6ea3d6b45a65a634ae15baccaf9667a1b6019a995b0e5666af76b9547fee5faf"} Jan 27 21:25:37 crc kubenswrapper[4915]: I0127 21:25:37.475956 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" event={"ID":"987ef9d0-475c-427e-8102-89613e048de5","Type":"ContainerStarted","Data":"536885c1a11b7ee1ad9d891a60f08c875625249d4c0a3d95b48b21d13c49ff19"} Jan 27 21:25:37 crc kubenswrapper[4915]: I0127 21:25:37.495412 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" podStartSLOduration=1.542849011 podStartE2EDuration="13.495398453s" podCreationTimestamp="2026-01-27 21:25:24 +0000 UTC" firstStartedPulling="2026-01-27 21:25:25.235724017 +0000 UTC m=+9816.593577681" lastFinishedPulling="2026-01-27 21:25:37.188273459 +0000 UTC m=+9828.546127123" observedRunningTime="2026-01-27 21:25:37.488042242 +0000 UTC m=+9828.845895906" watchObservedRunningTime="2026-01-27 21:25:37.495398453 +0000 UTC m=+9828.853252117" Jan 27 21:25:50 crc kubenswrapper[4915]: I0127 21:25:50.625283 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:25:50 crc kubenswrapper[4915]: I0127 21:25:50.625840 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:25:56 crc kubenswrapper[4915]: I0127 21:25:56.628732 4915 generic.go:334] "Generic (PLEG): container finished" podID="987ef9d0-475c-427e-8102-89613e048de5" containerID="536885c1a11b7ee1ad9d891a60f08c875625249d4c0a3d95b48b21d13c49ff19" exitCode=0 Jan 27 21:25:56 crc kubenswrapper[4915]: I0127 21:25:56.628830 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" event={"ID":"987ef9d0-475c-427e-8102-89613e048de5","Type":"ContainerDied","Data":"536885c1a11b7ee1ad9d891a60f08c875625249d4c0a3d95b48b21d13c49ff19"} Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.775267 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.817318 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vt59w/crc-debug-qvzl8"] Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.826735 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vt59w/crc-debug-qvzl8"] Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.964456 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fcnb\" (UniqueName: \"kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb\") pod \"987ef9d0-475c-427e-8102-89613e048de5\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.964646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host\") pod \"987ef9d0-475c-427e-8102-89613e048de5\" (UID: \"987ef9d0-475c-427e-8102-89613e048de5\") " Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.964715 4915 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host" (OuterVolumeSpecName: "host") pod "987ef9d0-475c-427e-8102-89613e048de5" (UID: "987ef9d0-475c-427e-8102-89613e048de5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.965178 4915 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/987ef9d0-475c-427e-8102-89613e048de5-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:25:57 crc kubenswrapper[4915]: I0127 21:25:57.970203 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb" (OuterVolumeSpecName: "kube-api-access-5fcnb") pod "987ef9d0-475c-427e-8102-89613e048de5" (UID: "987ef9d0-475c-427e-8102-89613e048de5"). InnerVolumeSpecName "kube-api-access-5fcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:25:58 crc kubenswrapper[4915]: I0127 21:25:58.067128 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fcnb\" (UniqueName: \"kubernetes.io/projected/987ef9d0-475c-427e-8102-89613e048de5-kube-api-access-5fcnb\") on node \"crc\" DevicePath \"\"" Jan 27 21:25:58 crc kubenswrapper[4915]: I0127 21:25:58.647167 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea3d6b45a65a634ae15baccaf9667a1b6019a995b0e5666af76b9547fee5faf" Jan 27 21:25:58 crc kubenswrapper[4915]: I0127 21:25:58.647216 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-qvzl8" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.015011 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vt59w/crc-debug-79x88"] Jan 27 21:25:59 crc kubenswrapper[4915]: E0127 21:25:59.015460 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987ef9d0-475c-427e-8102-89613e048de5" containerName="container-00" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.015477 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="987ef9d0-475c-427e-8102-89613e048de5" containerName="container-00" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.015685 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="987ef9d0-475c-427e-8102-89613e048de5" containerName="container-00" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.016426 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.083497 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.083572 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvgp\" (UniqueName: \"kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.184979 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.185383 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvgp\" (UniqueName: \"kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.185173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.206765 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvgp\" (UniqueName: \"kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp\") pod \"crc-debug-79x88\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.332762 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.375496 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987ef9d0-475c-427e-8102-89613e048de5" path="/var/lib/kubelet/pods/987ef9d0-475c-427e-8102-89613e048de5/volumes" Jan 27 21:25:59 crc kubenswrapper[4915]: I0127 21:25:59.659506 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/crc-debug-79x88" event={"ID":"be8f6fa4-46d0-4800-b79f-4bac72f17776","Type":"ContainerStarted","Data":"cea03762e61f1894a66c2fe2c35c61ef38bb03291ce5a17aab2ea77c73ce57e3"} Jan 27 21:26:00 crc kubenswrapper[4915]: I0127 21:26:00.668171 4915 generic.go:334] "Generic (PLEG): container finished" podID="be8f6fa4-46d0-4800-b79f-4bac72f17776" containerID="c944ea38e7f471252f810df61f7d3793736047bf78a110f9b74e652f33673a7c" exitCode=1 Jan 27 21:26:00 crc kubenswrapper[4915]: I0127 21:26:00.668205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/crc-debug-79x88" event={"ID":"be8f6fa4-46d0-4800-b79f-4bac72f17776","Type":"ContainerDied","Data":"c944ea38e7f471252f810df61f7d3793736047bf78a110f9b74e652f33673a7c"} Jan 27 21:26:00 crc kubenswrapper[4915]: I0127 21:26:00.722853 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vt59w/crc-debug-79x88"] Jan 27 21:26:00 crc kubenswrapper[4915]: I0127 21:26:00.728899 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vt59w/crc-debug-79x88"] Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.863161 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.935154 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxvgp\" (UniqueName: \"kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp\") pod \"be8f6fa4-46d0-4800-b79f-4bac72f17776\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.935314 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host\") pod \"be8f6fa4-46d0-4800-b79f-4bac72f17776\" (UID: \"be8f6fa4-46d0-4800-b79f-4bac72f17776\") " Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.935404 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host" (OuterVolumeSpecName: "host") pod "be8f6fa4-46d0-4800-b79f-4bac72f17776" (UID: "be8f6fa4-46d0-4800-b79f-4bac72f17776"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.935962 4915 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be8f6fa4-46d0-4800-b79f-4bac72f17776-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:26:01 crc kubenswrapper[4915]: I0127 21:26:01.942325 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp" (OuterVolumeSpecName: "kube-api-access-jxvgp") pod "be8f6fa4-46d0-4800-b79f-4bac72f17776" (UID: "be8f6fa4-46d0-4800-b79f-4bac72f17776"). InnerVolumeSpecName "kube-api-access-jxvgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:26:02 crc kubenswrapper[4915]: I0127 21:26:02.037432 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxvgp\" (UniqueName: \"kubernetes.io/projected/be8f6fa4-46d0-4800-b79f-4bac72f17776-kube-api-access-jxvgp\") on node \"crc\" DevicePath \"\"" Jan 27 21:26:02 crc kubenswrapper[4915]: I0127 21:26:02.688571 4915 scope.go:117] "RemoveContainer" containerID="c944ea38e7f471252f810df61f7d3793736047bf78a110f9b74e652f33673a7c" Jan 27 21:26:02 crc kubenswrapper[4915]: I0127 21:26:02.688610 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/crc-debug-79x88" Jan 27 21:26:03 crc kubenswrapper[4915]: I0127 21:26:03.369987 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8f6fa4-46d0-4800-b79f-4bac72f17776" path="/var/lib/kubelet/pods/be8f6fa4-46d0-4800-b79f-4bac72f17776/volumes" Jan 27 21:26:20 crc kubenswrapper[4915]: I0127 21:26:20.624737 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:26:20 crc kubenswrapper[4915]: I0127 21:26:20.625287 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:26:35 crc kubenswrapper[4915]: I0127 21:26:35.782963 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6985bc6658-t8db5_3614bc8f-8773-4f22-a9bd-01c34adbd6cc/barbican-api/0.log" Jan 27 21:26:35 crc kubenswrapper[4915]: I0127 21:26:35.923190 
4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6985bc6658-t8db5_3614bc8f-8773-4f22-a9bd-01c34adbd6cc/barbican-api-log/0.log" Jan 27 21:26:35 crc kubenswrapper[4915]: I0127 21:26:35.979139 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c875f5dbd-4s44w_80fb5a13-de72-4173-a859-349b05c9d042/barbican-keystone-listener/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.153128 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c875f5dbd-4s44w_80fb5a13-de72-4173-a859-349b05c9d042/barbican-keystone-listener-log/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.211206 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-575f575d8f-mwqfl_3b806ab6-0402-4f4a-8163-d2bb7da4beb5/barbican-worker/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.293442 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-575f575d8f-mwqfl_3b806ab6-0402-4f4a-8163-d2bb7da4beb5/barbican-worker-log/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.436316 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e859ffbc-394d-4d46-b6bb-42a96f99d76d/cinder-api-log/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.555072 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e859ffbc-394d-4d46-b6bb-42a96f99d76d/cinder-api/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.743514 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e47fb87-132e-4753-b24f-c79929b6e73e/cinder-backup/0.log" Jan 27 21:26:36 crc kubenswrapper[4915]: I0127 21:26:36.751513 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3e47fb87-132e-4753-b24f-c79929b6e73e/probe/0.log" Jan 27 21:26:36 crc 
kubenswrapper[4915]: I0127 21:26:36.839740 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eac71222-8c11-4c6c-9399-0e4eb53487d4/cinder-scheduler/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.008647 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eac71222-8c11-4c6c-9399-0e4eb53487d4/probe/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.062982 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1677f1f0-6421-4baa-82a5-9eb372118c1c/cinder-volume/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.132374 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_1677f1f0-6421-4baa-82a5-9eb372118c1c/probe/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.224418 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-75b77568d9-2lnpp_de74ec03-d1bc-43cc-b036-73b93a0d94b3/init/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.406151 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-75b77568d9-2lnpp_de74ec03-d1bc-43cc-b036-73b93a0d94b3/dnsmasq-dns/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.410178 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-75b77568d9-2lnpp_de74ec03-d1bc-43cc-b036-73b93a0d94b3/init/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.538275 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7efd00f-5998-493d-a94a-7c319ecc7663/glance-httpd/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.744285 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7efd00f-5998-493d-a94a-7c319ecc7663/glance-log/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.878475 4915 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4863f021-132b-43da-adf0-babc9a72f8a1/glance-httpd/0.log" Jan 27 21:26:37 crc kubenswrapper[4915]: I0127 21:26:37.916339 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4863f021-132b-43da-adf0-babc9a72f8a1/glance-log/0.log" Jan 27 21:26:38 crc kubenswrapper[4915]: I0127 21:26:38.100043 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5d48b84649-qr295_5b9cdc8a-c579-45e2-994b-9119ac49807f/keystone-api/0.log" Jan 27 21:26:38 crc kubenswrapper[4915]: I0127 21:26:38.112890 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492461-dwc92_a24fbf57-ee45-4af3-a8c3-d7be1a8d5190/keystone-cron/0.log" Jan 27 21:26:38 crc kubenswrapper[4915]: I0127 21:26:38.310360 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_05230450-655e-4be4-a0fb-031369fc838d/adoption/0.log" Jan 27 21:26:38 crc kubenswrapper[4915]: I0127 21:26:38.685099 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f78f7959c-b4nv2_7f48a50f-8b17-4d5d-9c46-64f7ad566e13/neutron-httpd/0.log" Jan 27 21:26:38 crc kubenswrapper[4915]: I0127 21:26:38.708648 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f78f7959c-b4nv2_7f48a50f-8b17-4d5d-9c46-64f7ad566e13/neutron-api/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.068688 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b81bcc90-1a59-47ed-bfe9-23cae865c970/nova-api-api/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.096633 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b81bcc90-1a59-47ed-bfe9-23cae865c970/nova-api-log/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.515340 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_ee50a3c9-47e3-457d-b0f2-5d828f365449/nova-cell0-conductor-conductor/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.526277 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_448a7ba3-9525-4822-bcd1-336594ae694a/nova-cell1-conductor-conductor/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.845090 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e3e36924-9ae1-415f-87e7-2a8eca6a7a93/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 21:26:39 crc kubenswrapper[4915]: I0127 21:26:39.872968 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e/nova-metadata-log/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.284499 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e993a7e6-4dc2-44f5-99c4-9fffe9aa3b4e/nova-metadata-metadata/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.300118 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_97d1cd56-fbc6-4949-a880-f04476b51277/nova-scheduler-scheduler/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.350174 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5db958498-dmvb4_5c98922c-696e-4b7e-a12a-16b8f840c516/init/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.567917 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5db958498-dmvb4_5c98922c-696e-4b7e-a12a-16b8f840c516/init/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.612293 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5db958498-dmvb4_5c98922c-696e-4b7e-a12a-16b8f840c516/octavia-api-provider-agent/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.779337 4915 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-lkpg8_2c14c777-a5d8-495b-b85a-43f045df5aa7/init/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.921439 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5db958498-dmvb4_5c98922c-696e-4b7e-a12a-16b8f840c516/octavia-api/0.log" Jan 27 21:26:40 crc kubenswrapper[4915]: I0127 21:26:40.977602 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-lkpg8_2c14c777-a5d8-495b-b85a-43f045df5aa7/init/0.log" Jan 27 21:26:41 crc kubenswrapper[4915]: I0127 21:26:41.075244 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-lkpg8_2c14c777-a5d8-495b-b85a-43f045df5aa7/octavia-healthmanager/0.log" Jan 27 21:26:41 crc kubenswrapper[4915]: I0127 21:26:41.197780 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f8s9t_53ac311c-9520-4cc5-a3a8-3ae1f822ff23/init/0.log" Jan 27 21:26:41 crc kubenswrapper[4915]: I0127 21:26:41.954617 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-wpc9q_3daa5b09-3c8c-45d8-820c-e5ec689d5776/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.042595 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f8s9t_53ac311c-9520-4cc5-a3a8-3ae1f822ff23/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.089559 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-f8s9t_53ac311c-9520-4cc5-a3a8-3ae1f822ff23/octavia-housekeeping/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.191806 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-wpc9q_3daa5b09-3c8c-45d8-820c-e5ec689d5776/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.218903 4915 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-wpc9q_3daa5b09-3c8c-45d8-820c-e5ec689d5776/octavia-amphora-httpd/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.343323 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-mbmqb_dc1d771a-7c3b-48fc-9779-5ee95669692b/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.565551 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-mbmqb_dc1d771a-7c3b-48fc-9779-5ee95669692b/octavia-rsyslog/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.623955 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-mbmqb_dc1d771a-7c3b-48fc-9779-5ee95669692b/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.640584 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-ngf58_25e3000f-52ca-4d59-8f94-3f6ba166b355/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.805708 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-ngf58_25e3000f-52ca-4d59-8f94-3f6ba166b355/init/0.log" Jan 27 21:26:42 crc kubenswrapper[4915]: I0127 21:26:42.918448 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f79f430a-9a9a-4bd9-b475-09ab36193443/mysql-bootstrap/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.038525 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-ngf58_25e3000f-52ca-4d59-8f94-3f6ba166b355/octavia-worker/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.109410 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f79f430a-9a9a-4bd9-b475-09ab36193443/mysql-bootstrap/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.115490 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f79f430a-9a9a-4bd9-b475-09ab36193443/galera/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.647825 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6cc7f0cb-29c3-471a-bb68-723c51a8bc6d/mysql-bootstrap/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.833617 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_85479810-8690-4848-a11f-f7cec2d3b63a/openstackclient/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.875324 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6cc7f0cb-29c3-471a-bb68-723c51a8bc6d/galera/0.log" Jan 27 21:26:43 crc kubenswrapper[4915]: I0127 21:26:43.893101 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6cc7f0cb-29c3-471a-bb68-723c51a8bc6d/mysql-bootstrap/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.080947 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tjwzg_5dc8ad96-6e14-4ab9-b6e5-34698d4600e9/openstack-network-exporter/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.113264 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec21d9de-6a54-4606-bcb5-ca3b1dad1edf/memcached/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.153726 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwr2k_31b1b872-d75b-4fb7-8995-a9183f0a8728/ovn-controller/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.293247 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mcbqz_199b3c02-36d1-4807-8615-2ac7d42b3a3c/ovsdb-server-init/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.499491 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-mcbqz_199b3c02-36d1-4807-8615-2ac7d42b3a3c/ovsdb-server-init/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.510669 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mcbqz_199b3c02-36d1-4807-8615-2ac7d42b3a3c/ovs-vswitchd/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.520775 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mcbqz_199b3c02-36d1-4807-8615-2ac7d42b3a3c/ovsdb-server/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.524827 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_fd3e52e3-4b88-4172-998c-b2e4048f94c3/adoption/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.728647 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab068f49-8db7-44b8-8210-d43885e4958c/ovn-northd/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.744073 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ab068f49-8db7-44b8-8210-d43885e4958c/openstack-network-exporter/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.825578 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_29af113a-6c28-4034-8541-3f450288d15f/openstack-network-exporter/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.915329 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_29af113a-6c28-4034-8541-3f450288d15f/ovsdbserver-nb/0.log" Jan 27 21:26:44 crc kubenswrapper[4915]: I0127 21:26:44.924117 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_2e0e8901-adce-4d27-8f75-95a18d45659a/openstack-network-exporter/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.007469 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_2e0e8901-adce-4d27-8f75-95a18d45659a/ovsdbserver-nb/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.136982 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_6bb28fc6-fbee-4298-b22c-0581e2f377f0/openstack-network-exporter/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.166714 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_6bb28fc6-fbee-4298-b22c-0581e2f377f0/ovsdbserver-nb/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.267871 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2c487705-cbc3-408b-82be-8634bfdb534d/openstack-network-exporter/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.305806 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2c487705-cbc3-408b-82be-8634bfdb534d/ovsdbserver-sb/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.385648 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_91cd6e4c-0379-4e16-8074-d2b2b347e64e/openstack-network-exporter/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.454101 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_91cd6e4c-0379-4e16-8074-d2b2b347e64e/ovsdbserver-sb/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.526397 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_814e662a-f9d9-422d-b2e3-7dd5be4313c3/openstack-network-exporter/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.744102 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_814e662a-f9d9-422d-b2e3-7dd5be4313c3/ovsdbserver-sb/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.930320 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-9f9d58b64-d6k58_bec9253f-bf66-4582-9ec1-d8a758db7367/placement-api/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.933740 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9f9d58b64-d6k58_bec9253f-bf66-4582-9ec1-d8a758db7367/placement-log/0.log" Jan 27 21:26:45 crc kubenswrapper[4915]: I0127 21:26:45.979657 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d90d24c5-20d2-49ea-928c-484d5baeeb9e/setup-container/0.log" Jan 27 21:26:46 crc kubenswrapper[4915]: I0127 21:26:46.144673 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d90d24c5-20d2-49ea-928c-484d5baeeb9e/setup-container/0.log" Jan 27 21:26:46 crc kubenswrapper[4915]: I0127 21:26:46.150241 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c15f8bf-aaf2-46e5-b80a-fe471d471444/setup-container/0.log" Jan 27 21:26:46 crc kubenswrapper[4915]: I0127 21:26:46.150517 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d90d24c5-20d2-49ea-928c-484d5baeeb9e/rabbitmq/0.log" Jan 27 21:26:46 crc kubenswrapper[4915]: I0127 21:26:46.355590 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c15f8bf-aaf2-46e5-b80a-fe471d471444/setup-container/0.log" Jan 27 21:26:46 crc kubenswrapper[4915]: I0127 21:26:46.425828 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0c15f8bf-aaf2-46e5-b80a-fe471d471444/rabbitmq/0.log" Jan 27 21:26:50 crc kubenswrapper[4915]: I0127 21:26:50.625188 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:26:50 
crc kubenswrapper[4915]: I0127 21:26:50.625847 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:26:50 crc kubenswrapper[4915]: I0127 21:26:50.625901 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" Jan 27 21:26:50 crc kubenswrapper[4915]: I0127 21:26:50.626650 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"} pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:26:50 crc kubenswrapper[4915]: I0127 21:26:50.626716 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" containerID="cri-o://1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" gracePeriod=600 Jan 27 21:26:50 crc kubenswrapper[4915]: E0127 21:26:50.748487 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:26:51 crc kubenswrapper[4915]: I0127 21:26:51.111606 4915 generic.go:334] "Generic 
(PLEG): container finished" podID="7e61db92-39b6-4acf-89af-34169c61e709" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" exitCode=0 Jan 27 21:26:51 crc kubenswrapper[4915]: I0127 21:26:51.111646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerDied","Data":"1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"} Jan 27 21:26:51 crc kubenswrapper[4915]: I0127 21:26:51.111677 4915 scope.go:117] "RemoveContainer" containerID="06f791b78b57e367f956c55040a37cbc5f4f8e197a80eca6e9f9c70b520b87b9" Jan 27 21:26:51 crc kubenswrapper[4915]: I0127 21:26:51.112262 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:26:51 crc kubenswrapper[4915]: E0127 21:26:51.112492 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:04 crc kubenswrapper[4915]: I0127 21:27:04.358553 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:27:04 crc kubenswrapper[4915]: E0127 21:27:04.359449 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.006766 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/util/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.213528 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/util/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.236267 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/pull/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.275276 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/pull/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.424959 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/pull/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.448272 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/extract/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.491180 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_73cd039ee0ff571e389a67eb6c1ec9ce429f21c65ba53bd39c09be4627zvc96_3c204dc6-9798-4818-8a10-ef8ccc2ce918/util/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.759604 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-mrctf_efcf0ee2-8cec-485e-b699-9228967f50de/manager/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.835033 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-l5rlq_411fdf42-e261-4eb9-a72b-e5067da8116d/manager/0.log" Jan 27 21:27:06 crc kubenswrapper[4915]: I0127 21:27:06.901232 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-4jwbr_841f427a-0fba-4d27-b325-9e048c7242d0/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.068437 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-2wrrs_3db878ad-a2f4-4bc6-bc28-a91ace116f6c/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.150220 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-lfz95_3abe8388-a712-4933-a52a-020b4d9cf1a1/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.262622 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-d7jbs_a54f43cb-a1a3-47dc-8f79-9902abf8e15a/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.670674 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-rklwj_4e9d96b4-5711-408c-8f62-198e8a9af22f/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.744841 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-c298f_ff429293-b11f-4482-90e7-41d277b5a044/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.984685 4915 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-7499c_c4fd8f30-3efb-4002-9508-247a3973daf5/manager/0.log" Jan 27 21:27:07 crc kubenswrapper[4915]: I0127 21:27:07.992124 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-k2wrn_157f36e8-64c1-41f5-a134-28a3dee716a0/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.196528 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-8qsl7_1390839d-2560-4530-9a71-15568aa4e400/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.239511 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-b4mh9_b8d0acd2-831c-4c69-bb9e-661c93caf365/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.557611 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-4rgf6_f82c79f5-df27-4ba5-b193-2442492a9897/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.650868 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-2dx85_14c2c0a2-eaa3-4f68-a081-35f187ee3ecd/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.671798 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xjclv_431eefc2-c140-49b9-acbc-523dffc5195b/manager/0.log" Jan 27 21:27:08 crc kubenswrapper[4915]: I0127 21:27:08.849627 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67d88b5675-dgbhd_9c22429f-a66c-41ae-b664-8f986ca713ef/operator/0.log" Jan 27 21:27:09 crc kubenswrapper[4915]: I0127 21:27:09.139950 
4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rg6kn_4741f828-8ade-428a-9903-cca6e8e1dc56/registry-server/0.log" Jan 27 21:27:09 crc kubenswrapper[4915]: I0127 21:27:09.449160 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-vcxbh_f448a710-dbc8-4561-8964-24b7bcf1ebf5/manager/0.log" Jan 27 21:27:09 crc kubenswrapper[4915]: I0127 21:27:09.544143 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-wn9ns_cd2e585e-7b6a-4b81-bd2e-969dfe93ea12/manager/0.log" Jan 27 21:27:09 crc kubenswrapper[4915]: I0127 21:27:09.683062 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-b5ht7_ce0a16bf-c68f-41c4-a3ee-88cfa0f32161/operator/0.log" Jan 27 21:27:09 crc kubenswrapper[4915]: I0127 21:27:09.982405 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-4lskf_04ed2ea6-d6e8-4e28-a038-7e9b23259535/manager/0.log" Jan 27 21:27:10 crc kubenswrapper[4915]: I0127 21:27:10.067719 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-9lrtq_3ca605d2-336e-4af3-ace1-f4615c16aa3e/manager/0.log" Jan 27 21:27:10 crc kubenswrapper[4915]: I0127 21:27:10.242749 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-jdv22_0a712a3e-d212-4452-950d-51d8c803ffdf/manager/0.log" Jan 27 21:27:10 crc kubenswrapper[4915]: I0127 21:27:10.268143 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-hs2pq_dd5fd940-cc29-4114-9634-3236f285b65c/manager/0.log" Jan 27 21:27:10 crc kubenswrapper[4915]: I0127 21:27:10.340781 
4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bf776578d-hcmpr_c5c7155d-017b-4653-9294-5d99093b027d/manager/0.log" Jan 27 21:27:18 crc kubenswrapper[4915]: I0127 21:27:18.357586 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:27:18 crc kubenswrapper[4915]: E0127 21:27:18.358835 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:29 crc kubenswrapper[4915]: I0127 21:27:29.363326 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:27:29 crc kubenswrapper[4915]: E0127 21:27:29.364328 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:29 crc kubenswrapper[4915]: I0127 21:27:29.984303 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tc9fv_c2e593e2-bf9f-415b-9efd-6ffd49cf9905/control-plane-machine-set-operator/0.log" Jan 27 21:27:30 crc kubenswrapper[4915]: I0127 21:27:30.246327 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ltcdr_8ea219a5-50e9-41e3-887e-e23d61ed73ef/kube-rbac-proxy/0.log" Jan 27 21:27:30 crc kubenswrapper[4915]: I0127 21:27:30.262945 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ltcdr_8ea219a5-50e9-41e3-887e-e23d61ed73ef/machine-api-operator/0.log" Jan 27 21:27:41 crc kubenswrapper[4915]: I0127 21:27:41.357963 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:27:41 crc kubenswrapper[4915]: E0127 21:27:41.358811 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:42 crc kubenswrapper[4915]: I0127 21:27:42.187471 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-95vhp_7b3c7c02-6cf8-49db-bd51-038ee25d5613/cert-manager-controller/0.log" Jan 27 21:27:42 crc kubenswrapper[4915]: I0127 21:27:42.391032 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-259gl_7aba51ef-4eab-42a1-bb96-a10f2ac2816e/cert-manager-cainjector/0.log" Jan 27 21:27:42 crc kubenswrapper[4915]: I0127 21:27:42.507248 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-x695t_ec5c0833-e215-4011-b71a-ed4854ab9be7/cert-manager-webhook/0.log" Jan 27 21:27:52 crc kubenswrapper[4915]: I0127 21:27:52.357838 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:27:52 crc 
kubenswrapper[4915]: E0127 21:27:52.359933 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.182430 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:27:54 crc kubenswrapper[4915]: E0127 21:27:54.183337 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8f6fa4-46d0-4800-b79f-4bac72f17776" containerName="container-00" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.183353 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8f6fa4-46d0-4800-b79f-4bac72f17776" containerName="container-00" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.183593 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8f6fa4-46d0-4800-b79f-4bac72f17776" containerName="container-00" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.185568 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.195344 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.315294 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.315512 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mzk\" (UniqueName: \"kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.315919 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.418014 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.418091 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z7mzk\" (UniqueName: \"kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.418183 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.418854 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.419107 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.451818 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mzk\" (UniqueName: \"kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk\") pod \"certified-operators-wb25q\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:54 crc kubenswrapper[4915]: I0127 21:27:54.525356 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:27:55 crc kubenswrapper[4915]: I0127 21:27:55.115435 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:27:55 crc kubenswrapper[4915]: I0127 21:27:55.675163 4915 generic.go:334] "Generic (PLEG): container finished" podID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerID="c7187743c47bf91ce273696e9d9ef515e66ea074b6aaff33da6baf8976eb6910" exitCode=0 Jan 27 21:27:55 crc kubenswrapper[4915]: I0127 21:27:55.675347 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerDied","Data":"c7187743c47bf91ce273696e9d9ef515e66ea074b6aaff33da6baf8976eb6910"} Jan 27 21:27:55 crc kubenswrapper[4915]: I0127 21:27:55.675493 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerStarted","Data":"ddfed03321ad1e436177b5d4241830416fbd3535f8e3ec48e51aa7e1b647de1e"} Jan 27 21:27:56 crc kubenswrapper[4915]: I0127 21:27:56.639866 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-g6dxv_ed1227d1-5878-4f4f-8407-d2d1228792d1/nmstate-console-plugin/0.log" Jan 27 21:27:56 crc kubenswrapper[4915]: I0127 21:27:56.684552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerStarted","Data":"bac8de119d1ebf7d6a4af3fd3c49af67377d1a898dd1cc069a40b23539569f46"} Jan 27 21:27:56 crc kubenswrapper[4915]: I0127 21:27:56.786236 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-knsbk_6da2a25b-7823-4b43-849c-9a7d3e3894f6/nmstate-handler/0.log" Jan 27 21:27:56 crc kubenswrapper[4915]: 
I0127 21:27:56.882376 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m56sv_c08f40b6-6f77-4a0b-8236-15d653d5fc44/kube-rbac-proxy/0.log" Jan 27 21:27:56 crc kubenswrapper[4915]: I0127 21:27:56.896096 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m56sv_c08f40b6-6f77-4a0b-8236-15d653d5fc44/nmstate-metrics/0.log" Jan 27 21:27:57 crc kubenswrapper[4915]: I0127 21:27:57.003640 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-r6dxr_431a5ebc-8bd2-400d-a65a-e696e4a00f5c/nmstate-operator/0.log" Jan 27 21:27:57 crc kubenswrapper[4915]: I0127 21:27:57.083822 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-89j8n_670c5b3f-6bbe-4055-ad13-e2db42a6c70e/nmstate-webhook/0.log" Jan 27 21:27:57 crc kubenswrapper[4915]: I0127 21:27:57.700214 4915 generic.go:334] "Generic (PLEG): container finished" podID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerID="bac8de119d1ebf7d6a4af3fd3c49af67377d1a898dd1cc069a40b23539569f46" exitCode=0 Jan 27 21:27:57 crc kubenswrapper[4915]: I0127 21:27:57.700542 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerDied","Data":"bac8de119d1ebf7d6a4af3fd3c49af67377d1a898dd1cc069a40b23539569f46"} Jan 27 21:27:58 crc kubenswrapper[4915]: I0127 21:27:58.711735 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerStarted","Data":"caa75d1b61b8a3d36b76f2cf40fa2484df76de1a017b46ea2e917625e9fefeb1"} Jan 27 21:27:58 crc kubenswrapper[4915]: I0127 21:27:58.740714 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-wb25q" podStartSLOduration=2.308676558 podStartE2EDuration="4.740681436s" podCreationTimestamp="2026-01-27 21:27:54 +0000 UTC" firstStartedPulling="2026-01-27 21:27:55.676974859 +0000 UTC m=+9967.034828543" lastFinishedPulling="2026-01-27 21:27:58.108979757 +0000 UTC m=+9969.466833421" observedRunningTime="2026-01-27 21:27:58.732130715 +0000 UTC m=+9970.089984379" watchObservedRunningTime="2026-01-27 21:27:58.740681436 +0000 UTC m=+9970.098535100" Jan 27 21:28:04 crc kubenswrapper[4915]: I0127 21:28:04.526066 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:04 crc kubenswrapper[4915]: I0127 21:28:04.526606 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:04 crc kubenswrapper[4915]: I0127 21:28:04.582417 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:04 crc kubenswrapper[4915]: I0127 21:28:04.809712 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:04 crc kubenswrapper[4915]: I0127 21:28:04.866742 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:28:05 crc kubenswrapper[4915]: I0127 21:28:05.357853 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:28:05 crc kubenswrapper[4915]: E0127 21:28:05.358121 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:28:06 crc kubenswrapper[4915]: I0127 21:28:06.778116 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wb25q" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="registry-server" containerID="cri-o://caa75d1b61b8a3d36b76f2cf40fa2484df76de1a017b46ea2e917625e9fefeb1" gracePeriod=2 Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.228099 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.230369 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.243680 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.364149 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.364224 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.364334 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gjhr4\" (UniqueName: \"kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.465902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjhr4\" (UniqueName: \"kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.466070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.466139 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.467086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.467422 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.486376 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjhr4\" (UniqueName: \"kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4\") pod \"redhat-marketplace-dvxvk\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.555980 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.788044 4915 generic.go:334] "Generic (PLEG): container finished" podID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerID="caa75d1b61b8a3d36b76f2cf40fa2484df76de1a017b46ea2e917625e9fefeb1" exitCode=0 Jan 27 21:28:07 crc kubenswrapper[4915]: I0127 21:28:07.788133 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerDied","Data":"caa75d1b61b8a3d36b76f2cf40fa2484df76de1a017b46ea2e917625e9fefeb1"} Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.165272 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.281817 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7mzk\" (UniqueName: \"kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk\") pod \"51783079-3f80-4d98-bf60-c25bc6e1490c\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.281876 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities\") pod \"51783079-3f80-4d98-bf60-c25bc6e1490c\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.282040 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content\") pod \"51783079-3f80-4d98-bf60-c25bc6e1490c\" (UID: \"51783079-3f80-4d98-bf60-c25bc6e1490c\") " Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.282903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities" (OuterVolumeSpecName: "utilities") pod "51783079-3f80-4d98-bf60-c25bc6e1490c" (UID: "51783079-3f80-4d98-bf60-c25bc6e1490c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.287451 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk" (OuterVolumeSpecName: "kube-api-access-z7mzk") pod "51783079-3f80-4d98-bf60-c25bc6e1490c" (UID: "51783079-3f80-4d98-bf60-c25bc6e1490c"). InnerVolumeSpecName "kube-api-access-z7mzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.326098 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51783079-3f80-4d98-bf60-c25bc6e1490c" (UID: "51783079-3f80-4d98-bf60-c25bc6e1490c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.383870 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7mzk\" (UniqueName: \"kubernetes.io/projected/51783079-3f80-4d98-bf60-c25bc6e1490c-kube-api-access-z7mzk\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.383902 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.383912 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51783079-3f80-4d98-bf60-c25bc6e1490c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.438245 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.797830 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wb25q" event={"ID":"51783079-3f80-4d98-bf60-c25bc6e1490c","Type":"ContainerDied","Data":"ddfed03321ad1e436177b5d4241830416fbd3535f8e3ec48e51aa7e1b647de1e"} Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.797892 4915 scope.go:117] "RemoveContainer" containerID="caa75d1b61b8a3d36b76f2cf40fa2484df76de1a017b46ea2e917625e9fefeb1" Jan 27 21:28:08 
crc kubenswrapper[4915]: I0127 21:28:08.798018 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wb25q" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.800050 4915 generic.go:334] "Generic (PLEG): container finished" podID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerID="219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df" exitCode=0 Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.800089 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerDied","Data":"219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df"} Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.800112 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerStarted","Data":"95bcd4e79d75b0cd5dda3db6814a38538d78d7dedc6fa255704911ca8cdee0b4"} Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.813308 4915 scope.go:117] "RemoveContainer" containerID="bac8de119d1ebf7d6a4af3fd3c49af67377d1a898dd1cc069a40b23539569f46" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.839468 4915 scope.go:117] "RemoveContainer" containerID="c7187743c47bf91ce273696e9d9ef515e66ea074b6aaff33da6baf8976eb6910" Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.860048 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:28:08 crc kubenswrapper[4915]: I0127 21:28:08.870374 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wb25q"] Jan 27 21:28:09 crc kubenswrapper[4915]: I0127 21:28:09.374116 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" 
path="/var/lib/kubelet/pods/51783079-3f80-4d98-bf60-c25bc6e1490c/volumes" Jan 27 21:28:09 crc kubenswrapper[4915]: I0127 21:28:09.810382 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerStarted","Data":"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959"} Jan 27 21:28:10 crc kubenswrapper[4915]: I0127 21:28:10.821431 4915 generic.go:334] "Generic (PLEG): container finished" podID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerID="e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959" exitCode=0 Jan 27 21:28:10 crc kubenswrapper[4915]: I0127 21:28:10.821466 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerDied","Data":"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959"} Jan 27 21:28:11 crc kubenswrapper[4915]: I0127 21:28:11.835674 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerStarted","Data":"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b"} Jan 27 21:28:16 crc kubenswrapper[4915]: I0127 21:28:16.357500 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:28:16 crc kubenswrapper[4915]: E0127 21:28:16.358253 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:28:17 crc 
kubenswrapper[4915]: I0127 21:28:17.556505 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:17 crc kubenswrapper[4915]: I0127 21:28:17.556855 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:17 crc kubenswrapper[4915]: I0127 21:28:17.809273 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:17 crc kubenswrapper[4915]: I0127 21:28:17.827377 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvxvk" podStartSLOduration=8.407288594 podStartE2EDuration="10.827356738s" podCreationTimestamp="2026-01-27 21:28:07 +0000 UTC" firstStartedPulling="2026-01-27 21:28:08.801962017 +0000 UTC m=+9980.159815681" lastFinishedPulling="2026-01-27 21:28:11.222030131 +0000 UTC m=+9982.579883825" observedRunningTime="2026-01-27 21:28:11.855491003 +0000 UTC m=+9983.213344677" watchObservedRunningTime="2026-01-27 21:28:17.827356738 +0000 UTC m=+9989.185210402" Jan 27 21:28:17 crc kubenswrapper[4915]: I0127 21:28:17.935284 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:18 crc kubenswrapper[4915]: I0127 21:28:18.043336 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:19 crc kubenswrapper[4915]: I0127 21:28:19.906885 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dvxvk" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="registry-server" containerID="cri-o://bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b" gracePeriod=2 Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.465315 4915 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.624116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjhr4\" (UniqueName: \"kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4\") pod \"3b8e431f-5bd5-49d8-932a-96501543a82b\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.624194 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content\") pod \"3b8e431f-5bd5-49d8-932a-96501543a82b\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.624328 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities\") pod \"3b8e431f-5bd5-49d8-932a-96501543a82b\" (UID: \"3b8e431f-5bd5-49d8-932a-96501543a82b\") " Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.625433 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities" (OuterVolumeSpecName: "utilities") pod "3b8e431f-5bd5-49d8-932a-96501543a82b" (UID: "3b8e431f-5bd5-49d8-932a-96501543a82b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.630362 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4" (OuterVolumeSpecName: "kube-api-access-gjhr4") pod "3b8e431f-5bd5-49d8-932a-96501543a82b" (UID: "3b8e431f-5bd5-49d8-932a-96501543a82b"). 
InnerVolumeSpecName "kube-api-access-gjhr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.655666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b8e431f-5bd5-49d8-932a-96501543a82b" (UID: "3b8e431f-5bd5-49d8-932a-96501543a82b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.727593 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.728098 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjhr4\" (UniqueName: \"kubernetes.io/projected/3b8e431f-5bd5-49d8-932a-96501543a82b-kube-api-access-gjhr4\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.728112 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8e431f-5bd5-49d8-932a-96501543a82b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.919559 4915 generic.go:334] "Generic (PLEG): container finished" podID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerID="bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b" exitCode=0 Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.919619 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerDied","Data":"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b"} Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.919651 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvxvk" event={"ID":"3b8e431f-5bd5-49d8-932a-96501543a82b","Type":"ContainerDied","Data":"95bcd4e79d75b0cd5dda3db6814a38538d78d7dedc6fa255704911ca8cdee0b4"} Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.919672 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvxvk" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.919684 4915 scope.go:117] "RemoveContainer" containerID="bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.941808 4915 scope.go:117] "RemoveContainer" containerID="e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959" Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.956650 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.965374 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvxvk"] Jan 27 21:28:20 crc kubenswrapper[4915]: I0127 21:28:20.973100 4915 scope.go:117] "RemoveContainer" containerID="219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.005664 4915 scope.go:117] "RemoveContainer" containerID="bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b" Jan 27 21:28:21 crc kubenswrapper[4915]: E0127 21:28:21.006452 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b\": container with ID starting with bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b not found: ID does not exist" containerID="bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b" Jan 27 21:28:21 crc 
kubenswrapper[4915]: I0127 21:28:21.006498 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b"} err="failed to get container status \"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b\": rpc error: code = NotFound desc = could not find container \"bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b\": container with ID starting with bde1b36bed5c2b41cd6b42737ccb6e16aedc8647166fd55fb9e40aa3a200603b not found: ID does not exist" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.006546 4915 scope.go:117] "RemoveContainer" containerID="e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959" Jan 27 21:28:21 crc kubenswrapper[4915]: E0127 21:28:21.006874 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959\": container with ID starting with e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959 not found: ID does not exist" containerID="e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.006931 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959"} err="failed to get container status \"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959\": rpc error: code = NotFound desc = could not find container \"e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959\": container with ID starting with e7c6bf98a0302784487459181616b370056a4ca54941404b044e184585552959 not found: ID does not exist" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.006959 4915 scope.go:117] "RemoveContainer" containerID="219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df" Jan 27 
21:28:21 crc kubenswrapper[4915]: E0127 21:28:21.007270 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df\": container with ID starting with 219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df not found: ID does not exist" containerID="219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.007306 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df"} err="failed to get container status \"219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df\": rpc error: code = NotFound desc = could not find container \"219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df\": container with ID starting with 219f71a39128b7da1e7786da148512fda8c9d071e07dc5e78f810a3dd93a81df not found: ID does not exist" Jan 27 21:28:21 crc kubenswrapper[4915]: I0127 21:28:21.372583 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" path="/var/lib/kubelet/pods/3b8e431f-5bd5-49d8-932a-96501543a82b/volumes" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.240491 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kbldf_80cb012a-8a30-42f8-b7f6-95505812f59e/kube-rbac-proxy/0.log" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.555851 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-frr-files/0.log" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.664238 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kbldf_80cb012a-8a30-42f8-b7f6-95505812f59e/controller/0.log" Jan 27 21:28:25 crc 
kubenswrapper[4915]: I0127 21:28:25.709298 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-frr-files/0.log" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.736784 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-metrics/0.log" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.762212 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-reloader/0.log" Jan 27 21:28:25 crc kubenswrapper[4915]: I0127 21:28:25.847101 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-reloader/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.721764 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-frr-files/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.731575 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-reloader/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.731774 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-metrics/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.751256 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-metrics/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.942529 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-frr-files/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.963928 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-reloader/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.982481 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/cp-metrics/0.log" Jan 27 21:28:26 crc kubenswrapper[4915]: I0127 21:28:26.985378 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/controller/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.165762 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/frr-metrics/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.182216 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/kube-rbac-proxy/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.198322 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/kube-rbac-proxy-frr/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.441607 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/reloader/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.462751 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bkl8l_78f43a7d-8f1a-4ce5-b9bf-9dacd8e1b5b4/frr-k8s-webhook-server/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.794557 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b5dc76568-9pz92_626172e0-b44e-4fc2-8aec-ca96111d56eb/manager/0.log" Jan 27 21:28:27 crc kubenswrapper[4915]: I0127 21:28:27.895168 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cf655d589-7h4rj_c4299dfb-b500-4c1b-94b2-576aedc4704a/webhook-server/0.log" Jan 27 21:28:28 crc kubenswrapper[4915]: I0127 21:28:28.313440 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2t6jv_65dcae83-67d2-4c0c-8a1b-2693b4024913/kube-rbac-proxy/0.log" Jan 27 21:28:29 crc kubenswrapper[4915]: I0127 21:28:29.153753 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2t6jv_65dcae83-67d2-4c0c-8a1b-2693b4024913/speaker/0.log" Jan 27 21:28:29 crc kubenswrapper[4915]: I0127 21:28:29.371679 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:28:29 crc kubenswrapper[4915]: E0127 21:28:29.371994 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:28:30 crc kubenswrapper[4915]: I0127 21:28:30.316869 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jr8xt_b6855ba9-2158-402c-ba8a-207bb1000e0d/frr/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.288772 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/util/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.498897 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/pull/0.log" Jan 27 21:28:41 crc 
kubenswrapper[4915]: I0127 21:28:41.532502 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/util/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.577649 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/pull/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.775026 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/util/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.825518 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/pull/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.827562 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7x6th_c0908716-de6b-48e7-a672-bf577fcaba00/extract/0.log" Jan 27 21:28:41 crc kubenswrapper[4915]: I0127 21:28:41.953230 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.108957 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/pull/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.121326 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/pull/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.122089 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.319376 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/pull/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.365959 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/extract/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.382640 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczvdjh_779f706d-66f4-4ade-8c06-88382cc4a041/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.506160 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.672167 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.677193 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/pull/0.log" Jan 27 
21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.706846 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/pull/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.901593 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/util/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.912880 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/pull/0.log" Jan 27 21:28:42 crc kubenswrapper[4915]: I0127 21:28:42.925721 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713crdz8_c75c7efc-2e8b-40e1-94ca-7be378c4b004/extract/0.log" Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.071399 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-utilities/0.log" Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.230560 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-content/0.log" Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.239018 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-content/0.log" Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.240099 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-utilities/0.log" Jan 27 
21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.358383 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:28:43 crc kubenswrapper[4915]: E0127 21:28:43.358693 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.452870 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-content/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.454433 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/extract-utilities/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.691998 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-utilities/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.860387 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-utilities/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.916195 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-content/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.938299 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-content/0.log"
Jan 27 21:28:43 crc kubenswrapper[4915]: I0127 21:28:43.954930 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-975c5_81368b67-b485-4b0b-926e-332225f9022c/registry-server/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.138049 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-content/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.189261 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/extract-utilities/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.466947 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g6bqr_2d935ad0-7baa-422a-af52-0a58d7c14312/marketplace-operator/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.482671 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-utilities/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.698060 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-content/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.735056 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-utilities/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.745163 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-content/0.log"
Jan 27 21:28:44 crc kubenswrapper[4915]: I0127 21:28:44.966442 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-utilities/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.004069 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/extract-content/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.134108 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kf8sz_392339bf-6a1b-476b-9de9-7e3d179b9a06/registry-server/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.157000 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-utilities/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.364505 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hqwv4_b5b20ad5-a32b-4346-999b-df644e36e6d5/registry-server/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.371425 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-utilities/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.418500 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-content/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.436728 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-content/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.577924 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-utilities/0.log"
Jan 27 21:28:45 crc kubenswrapper[4915]: I0127 21:28:45.602650 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/extract-content/0.log"
Jan 27 21:28:46 crc kubenswrapper[4915]: I0127 21:28:46.824188 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lhlml_7a9d312f-394f-4678-99fe-5a16946a5c6b/registry-server/0.log"
Jan 27 21:28:57 crc kubenswrapper[4915]: I0127 21:28:57.357562 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:28:57 crc kubenswrapper[4915]: E0127 21:28:57.358397 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:29:08 crc kubenswrapper[4915]: I0127 21:29:08.357354 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:29:08 crc kubenswrapper[4915]: E0127 21:29:08.358255 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:29:09 crc kubenswrapper[4915]: E0127 21:29:09.527962 4915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:36728->38.102.83.106:36797: write tcp 38.102.83.106:36728->38.102.83.106:36797: write: broken pipe
Jan 27 21:29:22 crc kubenswrapper[4915]: I0127 21:29:22.358141 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:29:22 crc kubenswrapper[4915]: E0127 21:29:22.359049 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:29:35 crc kubenswrapper[4915]: I0127 21:29:35.358113 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:29:35 crc kubenswrapper[4915]: E0127 21:29:35.359244 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:29:47 crc kubenswrapper[4915]: I0127 21:29:47.357414 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:29:47 crc kubenswrapper[4915]: E0127 21:29:47.358254 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:29:58 crc kubenswrapper[4915]: I0127 21:29:58.358991 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:29:58 crc kubenswrapper[4915]: E0127 21:29:58.359680 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.166130 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"]
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.167893 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.167997 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.168080 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="extract-utilities"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.168154 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="extract-utilities"
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.168248 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="extract-utilities"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.168318 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="extract-utilities"
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.168398 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.168490 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.168572 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="extract-content"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.168647 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="extract-content"
Jan 27 21:30:00 crc kubenswrapper[4915]: E0127 21:30:00.168726 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="extract-content"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.168814 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="extract-content"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.169366 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="51783079-3f80-4d98-bf60-c25bc6e1490c" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.169459 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8e431f-5bd5-49d8-932a-96501543a82b" containerName="registry-server"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.170205 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.172272 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.173853 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.193920 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"]
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.263975 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtfn\" (UniqueName: \"kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.264041 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.264071 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.366384 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtfn\" (UniqueName: \"kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.366456 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.366491 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.367629 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.373757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.383093 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtfn\" (UniqueName: \"kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn\") pod \"collect-profiles-29492490-ll27w\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.513131 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:00 crc kubenswrapper[4915]: I0127 21:30:00.968545 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"]
Jan 27 21:30:01 crc kubenswrapper[4915]: I0127 21:30:01.852579 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd87787a-68f4-4155-868e-be589dbbdc62" containerID="5c248eda574593fe86874f31c190263d31aa084b02fc588012a4a939fc8db8aa" exitCode=0
Jan 27 21:30:01 crc kubenswrapper[4915]: I0127 21:30:01.852770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w" event={"ID":"fd87787a-68f4-4155-868e-be589dbbdc62","Type":"ContainerDied","Data":"5c248eda574593fe86874f31c190263d31aa084b02fc588012a4a939fc8db8aa"}
Jan 27 21:30:01 crc kubenswrapper[4915]: I0127 21:30:01.853083 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w" event={"ID":"fd87787a-68f4-4155-868e-be589dbbdc62","Type":"ContainerStarted","Data":"91aaef3d9b374542f344cbb292ac149ad4958d156b94e9d7544c105bd8e5c932"}
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.208565 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.327857 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume\") pod \"fd87787a-68f4-4155-868e-be589dbbdc62\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") "
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.327899 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtfn\" (UniqueName: \"kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn\") pod \"fd87787a-68f4-4155-868e-be589dbbdc62\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") "
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.328002 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume\") pod \"fd87787a-68f4-4155-868e-be589dbbdc62\" (UID: \"fd87787a-68f4-4155-868e-be589dbbdc62\") "
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.328862 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume" (OuterVolumeSpecName: "config-volume") pod "fd87787a-68f4-4155-868e-be589dbbdc62" (UID: "fd87787a-68f4-4155-868e-be589dbbdc62"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.333826 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fd87787a-68f4-4155-868e-be589dbbdc62" (UID: "fd87787a-68f4-4155-868e-be589dbbdc62"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.343015 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn" (OuterVolumeSpecName: "kube-api-access-xwtfn") pod "fd87787a-68f4-4155-868e-be589dbbdc62" (UID: "fd87787a-68f4-4155-868e-be589dbbdc62"). InnerVolumeSpecName "kube-api-access-xwtfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.430146 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd87787a-68f4-4155-868e-be589dbbdc62-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.430207 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd87787a-68f4-4155-868e-be589dbbdc62-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.430221 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwtfn\" (UniqueName: \"kubernetes.io/projected/fd87787a-68f4-4155-868e-be589dbbdc62-kube-api-access-xwtfn\") on node \"crc\" DevicePath \"\""
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.871346 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w" event={"ID":"fd87787a-68f4-4155-868e-be589dbbdc62","Type":"ContainerDied","Data":"91aaef3d9b374542f344cbb292ac149ad4958d156b94e9d7544c105bd8e5c932"}
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.871411 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91aaef3d9b374542f344cbb292ac149ad4958d156b94e9d7544c105bd8e5c932"
Jan 27 21:30:03 crc kubenswrapper[4915]: I0127 21:30:03.871418 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-ll27w"
Jan 27 21:30:04 crc kubenswrapper[4915]: I0127 21:30:04.292065 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"]
Jan 27 21:30:04 crc kubenswrapper[4915]: I0127 21:30:04.300645 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-pzzdq"]
Jan 27 21:30:05 crc kubenswrapper[4915]: I0127 21:30:05.371056 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d78de6-9776-40cd-8198-126773f4ebc8" path="/var/lib/kubelet/pods/a7d78de6-9776-40cd-8198-126773f4ebc8/volumes"
Jan 27 21:30:11 crc kubenswrapper[4915]: I0127 21:30:11.357394 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:30:11 crc kubenswrapper[4915]: E0127 21:30:11.358109 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:30:24 crc kubenswrapper[4915]: I0127 21:30:24.358363 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:30:24 crc kubenswrapper[4915]: E0127 21:30:24.359174 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.037976 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"]
Jan 27 21:30:25 crc kubenswrapper[4915]: E0127 21:30:25.038530 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd87787a-68f4-4155-868e-be589dbbdc62" containerName="collect-profiles"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.038552 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd87787a-68f4-4155-868e-be589dbbdc62" containerName="collect-profiles"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.038772 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd87787a-68f4-4155-868e-be589dbbdc62" containerName="collect-profiles"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.040512 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.056487 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"]
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.197371 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.198083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.198128 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.299881 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.299952 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.300047 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.300583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.300657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.318608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w\") pod \"redhat-operators-lt9fb\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") " pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.378080 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:25 crc kubenswrapper[4915]: I0127 21:30:25.828865 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"]
Jan 27 21:30:26 crc kubenswrapper[4915]: I0127 21:30:26.118710 4915 generic.go:334] "Generic (PLEG): container finished" podID="15df8430-bce4-40cc-8a5c-eb5a7f840fbe" containerID="2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7" exitCode=0
Jan 27 21:30:26 crc kubenswrapper[4915]: I0127 21:30:26.118986 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerDied","Data":"2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7"}
Jan 27 21:30:26 crc kubenswrapper[4915]: I0127 21:30:26.119013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerStarted","Data":"0b66fd4bd497bdd170d4e585339d89bffb08d7bb85c8a7c4e048e03e1e439d81"}
Jan 27 21:30:26 crc kubenswrapper[4915]: I0127 21:30:26.121846 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 21:30:27 crc kubenswrapper[4915]: I0127 21:30:27.129996 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerStarted","Data":"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0"}
Jan 27 21:30:28 crc kubenswrapper[4915]: I0127 21:30:28.141430 4915 generic.go:334] "Generic (PLEG): container finished" podID="15df8430-bce4-40cc-8a5c-eb5a7f840fbe" containerID="52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0" exitCode=0
Jan 27 21:30:28 crc kubenswrapper[4915]: I0127 21:30:28.141694 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerDied","Data":"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0"}
Jan 27 21:30:29 crc kubenswrapper[4915]: I0127 21:30:29.158248 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerStarted","Data":"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450"}
Jan 27 21:30:29 crc kubenswrapper[4915]: I0127 21:30:29.183531 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lt9fb" podStartSLOduration=1.713268985 podStartE2EDuration="4.183510397s" podCreationTimestamp="2026-01-27 21:30:25 +0000 UTC" firstStartedPulling="2026-01-27 21:30:26.121509422 +0000 UTC m=+10117.479363076" lastFinishedPulling="2026-01-27 21:30:28.591750824 +0000 UTC m=+10119.949604488" observedRunningTime="2026-01-27 21:30:29.177906169 +0000 UTC m=+10120.535759833" watchObservedRunningTime="2026-01-27 21:30:29.183510397 +0000 UTC m=+10120.541364061"
Jan 27 21:30:32 crc kubenswrapper[4915]: I0127 21:30:32.191246 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" containerID="ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6" exitCode=0
Jan 27 21:30:32 crc kubenswrapper[4915]: I0127 21:30:32.191321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vt59w/must-gather-9w4f4" event={"ID":"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965","Type":"ContainerDied","Data":"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6"}
Jan 27 21:30:32 crc kubenswrapper[4915]: I0127 21:30:32.192918 4915 scope.go:117] "RemoveContainer" containerID="ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6"
Jan 27 21:30:32 crc kubenswrapper[4915]: I0127 21:30:32.604168 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vt59w_must-gather-9w4f4_0f9c81c5-31fb-48a5-9a0d-6f87b46ec965/gather/0.log"
Jan 27 21:30:35 crc kubenswrapper[4915]: I0127 21:30:35.358415 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0"
Jan 27 21:30:35 crc kubenswrapper[4915]: E0127 21:30:35.358977 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709"
Jan 27 21:30:35 crc kubenswrapper[4915]: I0127 21:30:35.379034 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:35 crc kubenswrapper[4915]: I0127 21:30:35.379338 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:35 crc kubenswrapper[4915]: I0127 21:30:35.423733 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:36 crc kubenswrapper[4915]: I0127 21:30:36.293548 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:36 crc kubenswrapper[4915]: I0127 21:30:36.357806 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"]
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.253426 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lt9fb" podUID="15df8430-bce4-40cc-8a5c-eb5a7f840fbe" containerName="registry-server" containerID="cri-o://0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450" gracePeriod=2
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.744922 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lt9fb"
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.840362 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities\") pod \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") "
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.840425 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w\") pod \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") "
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.840450 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content\") pod \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\" (UID: \"15df8430-bce4-40cc-8a5c-eb5a7f840fbe\") "
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.841569 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities" (OuterVolumeSpecName: "utilities") pod "15df8430-bce4-40cc-8a5c-eb5a7f840fbe" (UID: "15df8430-bce4-40cc-8a5c-eb5a7f840fbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.849638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w" (OuterVolumeSpecName: "kube-api-access-4g74w") pod "15df8430-bce4-40cc-8a5c-eb5a7f840fbe" (UID: "15df8430-bce4-40cc-8a5c-eb5a7f840fbe"). InnerVolumeSpecName "kube-api-access-4g74w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.943291 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.943332 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g74w\" (UniqueName: \"kubernetes.io/projected/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-kube-api-access-4g74w\") on node \"crc\" DevicePath \"\""
Jan 27 21:30:38 crc kubenswrapper[4915]: I0127 21:30:38.978610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15df8430-bce4-40cc-8a5c-eb5a7f840fbe" (UID: "15df8430-bce4-40cc-8a5c-eb5a7f840fbe"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.047937 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15df8430-bce4-40cc-8a5c-eb5a7f840fbe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.266949 4915 generic.go:334] "Generic (PLEG): container finished" podID="15df8430-bce4-40cc-8a5c-eb5a7f840fbe" containerID="0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450" exitCode=0 Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.266997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerDied","Data":"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450"} Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.267029 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt9fb" event={"ID":"15df8430-bce4-40cc-8a5c-eb5a7f840fbe","Type":"ContainerDied","Data":"0b66fd4bd497bdd170d4e585339d89bffb08d7bb85c8a7c4e048e03e1e439d81"} Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.267050 4915 scope.go:117] "RemoveContainer" containerID="0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.269090 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lt9fb" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.293496 4915 scope.go:117] "RemoveContainer" containerID="52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.384289 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"] Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.417615 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lt9fb"] Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.461154 4915 scope.go:117] "RemoveContainer" containerID="2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.550752 4915 scope.go:117] "RemoveContainer" containerID="0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450" Jan 27 21:30:39 crc kubenswrapper[4915]: E0127 21:30:39.552356 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450\": container with ID starting with 0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450 not found: ID does not exist" containerID="0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.552394 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450"} err="failed to get container status \"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450\": rpc error: code = NotFound desc = could not find container \"0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450\": container with ID starting with 0e9ff7222fcfec28dc1c02ee702e42412bff2ac634824e6ad8a19d200d48f450 not found: ID does 
not exist" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.552447 4915 scope.go:117] "RemoveContainer" containerID="52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0" Jan 27 21:30:39 crc kubenswrapper[4915]: E0127 21:30:39.552810 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0\": container with ID starting with 52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0 not found: ID does not exist" containerID="52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.552872 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0"} err="failed to get container status \"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0\": rpc error: code = NotFound desc = could not find container \"52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0\": container with ID starting with 52b7e4edd1d61cbc9d4e8a509f935bdc821c72106f1d6d3c520da0d1aacac8a0 not found: ID does not exist" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.552913 4915 scope.go:117] "RemoveContainer" containerID="2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7" Jan 27 21:30:39 crc kubenswrapper[4915]: E0127 21:30:39.553274 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7\": container with ID starting with 2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7 not found: ID does not exist" containerID="2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.553385 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7"} err="failed to get container status \"2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7\": rpc error: code = NotFound desc = could not find container \"2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7\": container with ID starting with 2d256f787c5355b25bcb8e783abd2afa837443d4399769d078738cacb51d5aa7 not found: ID does not exist" Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.632486 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vt59w/must-gather-9w4f4"] Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.633256 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vt59w/must-gather-9w4f4" podUID="0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" containerName="copy" containerID="cri-o://f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9" gracePeriod=2 Jan 27 21:30:39 crc kubenswrapper[4915]: I0127 21:30:39.645778 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vt59w/must-gather-9w4f4"] Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.122547 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vt59w_must-gather-9w4f4_0f9c81c5-31fb-48a5-9a0d-6f87b46ec965/copy/0.log" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.123199 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.173173 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output\") pod \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.173980 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t2nz\" (UniqueName: \"kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz\") pod \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\" (UID: \"0f9c81c5-31fb-48a5-9a0d-6f87b46ec965\") " Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.192193 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz" (OuterVolumeSpecName: "kube-api-access-5t2nz") pod "0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" (UID: "0f9c81c5-31fb-48a5-9a0d-6f87b46ec965"). InnerVolumeSpecName "kube-api-access-5t2nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.278257 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vt59w_must-gather-9w4f4_0f9c81c5-31fb-48a5-9a0d-6f87b46ec965/copy/0.log" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.278575 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" containerID="f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9" exitCode=143 Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.278650 4915 scope.go:117] "RemoveContainer" containerID="f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.278708 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vt59w/must-gather-9w4f4" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.280905 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t2nz\" (UniqueName: \"kubernetes.io/projected/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-kube-api-access-5t2nz\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.297471 4915 scope.go:117] "RemoveContainer" containerID="ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.336930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" (UID: "0f9c81c5-31fb-48a5-9a0d-6f87b46ec965"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.383626 4915 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.386990 4915 scope.go:117] "RemoveContainer" containerID="f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9" Jan 27 21:30:40 crc kubenswrapper[4915]: E0127 21:30:40.387896 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9\": container with ID starting with f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9 not found: ID does not exist" containerID="f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.387978 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9"} err="failed to get container status \"f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9\": rpc error: code = NotFound desc = could not find container \"f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9\": container with ID starting with f7e5108a9ac1f0954ab0106f2a6833ebb593b1640367f16e57ec9ccab1e9e4f9 not found: ID does not exist" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.388030 4915 scope.go:117] "RemoveContainer" containerID="ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6" Jan 27 21:30:40 crc kubenswrapper[4915]: E0127 21:30:40.388651 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6\": container with ID starting with ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6 not found: ID does not exist" containerID="ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6" Jan 27 21:30:40 crc kubenswrapper[4915]: I0127 21:30:40.388682 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6"} err="failed to get container status \"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6\": rpc error: code = NotFound desc = could not find container \"ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6\": container with ID starting with ae31fb97529f3ac8b1cbaaaa62caf2beab6e8c568f3b805f5cf7597a10a7f9d6 not found: ID does not exist" Jan 27 21:30:41 crc kubenswrapper[4915]: I0127 21:30:41.369272 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9c81c5-31fb-48a5-9a0d-6f87b46ec965" path="/var/lib/kubelet/pods/0f9c81c5-31fb-48a5-9a0d-6f87b46ec965/volumes" Jan 27 21:30:41 crc kubenswrapper[4915]: I0127 21:30:41.370687 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15df8430-bce4-40cc-8a5c-eb5a7f840fbe" path="/var/lib/kubelet/pods/15df8430-bce4-40cc-8a5c-eb5a7f840fbe/volumes" Jan 27 21:30:47 crc kubenswrapper[4915]: I0127 21:30:47.357733 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:30:47 crc kubenswrapper[4915]: E0127 21:30:47.359754 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" 
podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:31:00 crc kubenswrapper[4915]: I0127 21:31:00.358116 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:31:00 crc kubenswrapper[4915]: E0127 21:31:00.358804 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:31:01 crc kubenswrapper[4915]: I0127 21:31:01.176062 4915 scope.go:117] "RemoveContainer" containerID="11c8f6325578761406a1b46c2125f6457a8837cd1a80c573bbb260874c814d10" Jan 27 21:31:11 crc kubenswrapper[4915]: I0127 21:31:11.358076 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:31:11 crc kubenswrapper[4915]: E0127 21:31:11.358987 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:31:22 crc kubenswrapper[4915]: I0127 21:31:22.357102 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:31:22 crc kubenswrapper[4915]: E0127 21:31:22.359037 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:31:37 crc kubenswrapper[4915]: I0127 21:31:37.357477 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:31:37 crc kubenswrapper[4915]: E0127 21:31:37.358253 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:31:49 crc kubenswrapper[4915]: I0127 21:31:49.357828 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:31:49 crc kubenswrapper[4915]: E0127 21:31:49.358570 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8dsj_openshift-machine-config-operator(7e61db92-39b6-4acf-89af-34169c61e709)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" Jan 27 21:32:01 crc kubenswrapper[4915]: I0127 21:32:01.275223 4915 scope.go:117] "RemoveContainer" containerID="536885c1a11b7ee1ad9d891a60f08c875625249d4c0a3d95b48b21d13c49ff19" Jan 27 21:32:04 crc kubenswrapper[4915]: I0127 21:32:04.357574 4915 scope.go:117] "RemoveContainer" containerID="1d61690980135948cad88eaa7ca5dafe2d93c0de702675bd8d443bf79b508eb0" Jan 27 21:32:05 crc kubenswrapper[4915]: 
I0127 21:32:05.108676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" event={"ID":"7e61db92-39b6-4acf-89af-34169c61e709","Type":"ContainerStarted","Data":"6f629987b3afe09c81732c5c4dd883703285902136b57903d13c37edf4f9c660"} Jan 27 21:34:20 crc kubenswrapper[4915]: I0127 21:34:20.625180 4915 patch_prober.go:28] interesting pod/machine-config-daemon-q8dsj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:34:20 crc kubenswrapper[4915]: I0127 21:34:20.625594 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8dsj" podUID="7e61db92-39b6-4acf-89af-34169c61e709" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136227563024457 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136227563017374 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136203022016477 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136203022015447 5ustar corecore